Sample records for common modeling assumption

  1. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    EPA Science Inventory

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...

  2. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.

  3. Variability of hemodynamic parameters using the common viscosity assumption in a computational fluid dynamics analysis of intracranial aneurysms.

    PubMed

    Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi

    2017-01-01

    In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for three different-sized aneurysms, two using measured value-based non-Newtonian models and one using a Newtonian model. The parameters proposed to predict an aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. Comparing the two non-Newtonian models' difference ratios in NWSS relative to the Newtonian model, the ratios differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or non-Newtonian viscosity assumption could lead to values different from those of the patient-specific viscosity model for hemodynamic parameters such as NWSS.
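
    The divergence reported above comes down to how viscosity is modeled as a function of shear rate. A minimal sketch of the two assumptions (the Carreau parameters are typical literature values for blood, not the volunteers' measured datasets):

    ```python
    import numpy as np

    def newtonian_viscosity(shear_rate, mu=0.0035):
        """Common assumption: constant viscosity (Pa*s), independent of shear."""
        return np.full_like(shear_rate, mu, dtype=float)

    def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.0035,
                          lam=3.313, n=0.3568):
        """Shear-thinning (non-Newtonian) model: viscosity falls as shear rises."""
        return mu_inf + (mu0 - mu_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

    shear = np.logspace(-2, 3, 6)          # shear rates, 1/s
    print(newtonian_viscosity(shear))      # flat profile
    print(carreau_viscosity(shear))        # much higher viscosity at low shear
    ```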

  4. Neural models on temperature regulation for cold-stressed animals

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.

    1975-01-01

    The present review evaluates several assumptions common to a variety of current models for thermoregulation in cold-stressed animals. Three areas covered by the models are discussed: signals to and from the central nervous system (CNS), portions of the CNS involved, and the arrangement of neurons within networks. Assumptions in each of these categories are considered. The evaluation of the models is based on the experimental foundations of the assumptions. Regions of the nervous system concerned here include the hypothalamus, the skin, the spinal cord, the hippocampus, and the septal area of the brain.

  5. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  6. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  7. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  8. A Conditional Joint Modeling Approach for Locally Dependent Item Responses and Response Times

    ERIC Educational Resources Information Center

    Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua

    2015-01-01

    The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…

  9. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions

    PubMed Central

    Momeni, Babak; Xie, Li; Shou, Wenying

    2017-01-01

    Pairwise models are commonly used to describe many-species communities. In these models, an individual receives additive fitness effects from pairwise interactions with each species in the community ('additivity assumption'). All pairwise interactions are typically represented by a single equation where parameters reflect signs and strengths of fitness effects ('universality assumption'). Here, we show that a single equation fails to qualitatively capture diverse pairwise microbial interactions. We build mechanistic reference models for two microbial species engaging in commonly-found chemical-mediated interactions, and attempt to derive pairwise models. Different equations are appropriate depending on whether a mediator is consumable or reusable, whether an interaction is mediated by one or more mediators, and sometimes even on quantitative details of the community (e.g. relative fitness of the two species, initial conditions). Our results, combined with potential violation of the additivity assumption in many-species communities, suggest that pairwise modeling will often fail to predict microbial dynamics. DOI: http://dx.doi.org/10.7554/eLife.25051.001 PMID:28350295
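
    The pairwise formalism under critique fits in a few lines; here is a minimal sketch of a two-species Lotka-Volterra pairwise model embodying the additivity and universality assumptions (growth rates and interaction coefficients are hypothetical):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([0.5, 0.3])        # intrinsic growth rates, hypothetical
    A = np.array([[-1.0, 0.4],      # A[i, j]: additive fitness effect of j on i
                  [0.6, -1.0]])

    def lv_pairwise(t, x):
        # dx_i/dt = x_i * (r_i + sum_j A[i, j] * x_j)
        return x * (r + A @ x)

    sol = solve_ivp(lv_pairwise, (0, 50), [0.1, 0.1])
    print(sol.y[:, -1])             # abundances at t = 50
    ```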

  10. Validation of abundance estimates from mark-recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Treesearch

    Amanda E. Rosenberger; Jason B. Dunham

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Peterson mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams....
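
    For reference, both estimators under evaluation are short closed-population formulas; a sketch with hypothetical catch counts (Chapman's correction for mark-recapture, Seber-LeCren for two-pass removal):

    ```python
    def lincoln_petersen_chapman(marked, caught, recaptured):
        """Bias-corrected mark-recapture estimate; assumes a closed population
        and equal catchability of marked and unmarked fish."""
        return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

    def two_pass_removal(c1, c2):
        """Two-pass removal estimate; assumes equal capture probability on
        each electrofishing pass."""
        return c1 ** 2 / (c1 - c2)

    print(lincoln_petersen_chapman(45, 52, 18))  # hypothetical counts
    print(two_pass_removal(60, 25))
    ```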

  11. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to model airflow and particulate transport. This modeling is then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.

  12. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  13. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  14. Multilevel models for estimating incremental net benefits in multinational studies.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John

    2007-08-01

    Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.

  15. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.

  16. Modeling Local Item Dependence Due to Common Test Format with a Multidimensional Rasch Model

    ERIC Educational Resources Information Center

    Baghaei, Purya; Aryadoust, Vahid

    2015-01-01

    Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…

  17. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  18. AQUATIC TOXICITY MODE OF ACTION STUDIES APPLIED TO QSAR DEVELOPMENT

    EPA Science Inventory

    A series of QSAR models for predicting fish acute lethality were developed using systematically collected data on more than 600 chemicals. These models were developed based on the assumption that chemicals producing toxicity through a common mechanism will have commonality in the...

  19. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  20. Shared additive genetic influences on DSM-IV criteria for alcohol dependence in subjects of European ancestry.

    PubMed

    Palmer, Rohan H C; McGeary, John E; Heath, Andrew C; Keller, Matthew C; Brick, Leslie A; Knopik, Valerie S

    2015-12-01

    Genetic studies of alcohol dependence (AD) have identified several candidate loci and genes, but most observed effects are small and difficult to reproduce. A plausible explanation for inconsistent findings may be a violation of the assumption that genetic factors contributing to each of the seven DSM-IV criteria point to a single underlying dimension of risk. Given that recent twin studies suggest that the genetic architecture of AD is complex and probably involves multiple discrete genetic factors, the current study employed common single nucleotide polymorphisms in two multivariate genetic models to examine the assumption that the genetic risk underlying DSM-IV AD is unitary. AD symptoms and genome-wide single nucleotide polymorphism (SNP) data from 2596 individuals of European descent from the Study of Addiction: Genetics and Environment were analyzed using genomic-relatedness-matrix restricted maximum likelihood. DSM-IV AD symptom covariance was described using two multivariate genetic factor models. Common SNPs explained 30% (standard error=0.136, P=0.012) of the variance in AD diagnosis. Additive genetic effects varied across AD symptoms. The common pathway model approach suggested that symptoms could be described by a single latent variable that had a SNP heritability of 31% (0.130, P=0.008). Similarly, the exploratory genetic factor model approach suggested that the genetic variance/covariance across symptoms could be represented by a single genetic factor that accounted for at least 60% of the genetic variance in any one symptom. Additive genetic effects on DSM-IV alcohol dependence criteria overlap. The assumption of common genetic effects across alcohol dependence symptoms appears to be a valid assumption. © 2015 Society for the Study of Addiction.

  1. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    PubMed

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.

  2. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    PubMed Central

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking. PMID:28533971
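
    The central misconception documented in both records is testing normality of the variables rather than of the errors. A minimal sketch of the correct check on simulated data (names and seed are illustrative):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.exponential(2.0, 200)              # a skewed predictor is fine
    y = 1.5 + 0.8 * x + rng.normal(0, 1, 200)  # errors are normal

    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (intercept + slope * x)

    # The regression normality assumption concerns the errors, so test the
    # residuals, not the raw variables.
    print(stats.shapiro(x).pvalue)          # tiny: x is skewed, model still valid
    print(stats.shapiro(residuals).pvalue)  # large: the relevant check passes
    ```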

  3. Pendulum Motion and Differential Equations

    ERIC Educational Resources Information Center

    Reid, Thomas F.; King, Stephen C.

    2009-01-01

    A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
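
    The impact of the simplifying assumption is easy to demonstrate numerically: integrate the pendulum equation with and without the small-angle linearization and compare (length and amplitude below are hypothetical):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    g, L = 9.81, 1.0               # 1 m pendulum
    theta0 = np.radians(60)        # large enough that sin(theta) != theta

    def full(t, y):                # theta'' = -(g/L) * sin(theta)
        return [y[1], -(g / L) * np.sin(y[0])]

    def small_angle(t, y):         # linearized: theta'' = -(g/L) * theta
        return [y[1], -(g / L) * y[0]]

    for rhs in (full, small_angle):
        sol = solve_ivp(rhs, (0, 10), [theta0, 0.0], rtol=1e-8)
        print(rhs.__name__, float(np.degrees(sol.y[0, -1])))  # angle at t = 10 s
    ```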

  4. Common Cause Failure Modeling: Aerospace Versus Nuclear

    NASA Technical Reports Server (NTRS)

    Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer

    2010-01-01

    Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from whence this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.

  5. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. I. Model description and sensitivity analysis

    PubMed Central

    2013-01-01

    Background Most of the current biophysical models designed to address the large-scale distribution of malaria assume that transmission of the disease is independent of the vector involved. Another common assumption in this type of model is that the mortality rate of mosquitoes is constant over their life span and that their dispersion is negligible. Mosquito models are important in the prediction of malaria and hence there is a need for a realistic representation of the vectors involved. Results We construct a biophysical model including two competing species, Anopheles gambiae s.s. and Anopheles arabiensis. Sensitivity analyses highlight the importance of relative humidity and mosquito size, the initial conditions and dispersion, and a rarely used parameter, the probability of finding blood. We also show that the assumption of exponential mortality of adult mosquitoes does not match the observed data, and suggest that an age dimension can overcome this problem. Conclusions This study highlights some of the assumptions commonly used when constructing mosquito-malaria models and presents a realistic model of An. gambiae s.s. and An. arabiensis and their interaction. This new mosquito model, OMaWa, can improve our understanding of the dynamics of these vectors, which in turn can be used to understand the dynamics of malaria. PMID:23342980

  6. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298

  7. Temporal Clustering and Sequencing in Short-Term Memory and Episodic Memory

    ERIC Educational Resources Information Center

    Farrell, Simon

    2012-01-01

    A model of short-term memory and episodic memory is presented, with the core assumptions that (a) people parse their continuous experience into episodic clusters and (b) items are clustered together in memory as episodes by binding information within an episode to a common temporal context. Along with the additional assumption that information…

  8. Testing for Measurement and Structural Equivalence in Large-Scale Cross-Cultural Studies: Addressing the Issue of Nonequivalence

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; van de Vijver, Fons J. R.

    2010-01-01

    A critical assumption in cross-cultural comparative research is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent). Structural equation modeling (SEM) procedures are commonly used in testing these assumptions of multigroup equivalence.…

  9. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, a comprehensive investigation of the sensitivity of FSI simulations in patient-specific IAs is conducted using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations for IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  10. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    PubMed

    Moore, Julia L; Remais, Justin V

    2014-03-01

    Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
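
    As a concrete instance of the structural choices discussed above, the daily average method (the method the review finds consistently accurate) reduces to a short function; thresholds and temperatures below are hypothetical:

    ```python
    def degree_days_avg(t_min, t_max, base, upper=None):
        """Daily average method: cap the daily mean at the upper threshold,
        then accumulate whatever exceeds the base threshold."""
        mean = (t_min + t_max) / 2.0
        if upper is not None:
            mean = min(mean, upper)
        return max(0.0, mean - base)

    # one hypothetical week of (min, max) temperatures in deg C
    days = [(8, 21), (11, 26), (14, 33), (9, 18), (12, 24), (15, 31), (10, 22)]
    print(sum(degree_days_avg(lo, hi, base=10, upper=30) for lo, hi in days))
    ```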

  11. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    NASA Astrophysics Data System (ADS)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  12. Robustness of location estimators under t-distributions: a literature review

    NASA Astrophysics Data System (ADS)

    Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.

    2017-03-01

    The assumption of normality is commonly used in estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since the t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use the onion yield data which includes outliers as a case study and showed that the t model produces better fit than the normal model.
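
    A minimal illustration of the robustness argument: estimate a location parameter for data containing outliers under a normal model (the sample mean) and under a t model with fixed low degrees of freedom (df = 3 is an arbitrary choice for illustration):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    y = np.concatenate([rng.normal(50, 2, 95), [95, 98, 102, 110, 120]])

    loc_normal = y.mean()                       # normal-model location estimate
    loc_t, scale_t = stats.t.fit(y, fdf=3)[1:]  # fix df = 3, fit loc and scale
    print(round(loc_normal, 2), round(loc_t, 2))  # the t fit resists the outliers
    ```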

  13. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  14. Comparison of Factor Simplicity Indices for Dichotomous Data: DETECT R, Bentler's Simplicity Index, and the Loading Simplicity Index

    ERIC Educational Resources Information Center

    Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick

    2008-01-01

    A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, test items tap into only one latent trait. This assumption can be assessed several ways, using nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…

  15. Collaboration or negotiation: two ways of interacting suggest how shared thinking develops.

    PubMed

    Mejía-Arauz, Rebeca; Rogoff, Barbara; Dayton, Andrew; Henne-Ochoa, Richard

    2018-03-09

    This paper contrasts two ways that shared thinking can be conceptualized: as negotiation, where individuals join their separate ideas, or collaboration, as people mutually engage together in a unified process, as an ensemble. We argue that these paradigms are culturally based, with the negotiation model fitting within an assumption system of separate entities-an assumption system we believe to be common in psychology and in middle-class European American society-and the collaboration model fitting within a holistic worldview that appears to be common in Indigenous-heritage communities of the Americas. We discuss cultural differences in children's interactions-as negotiation or collaboration-that suggest how these distinct paradigms develop. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    USDA-ARS?s Scientific Manuscript database

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
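
    Once the Langmuir form is written down, the weighted fit is a one-liner with scipy; a sketch with hypothetical sorption data and an assumed 1/C weighting scheme:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, s_max, k):
        """Langmuir isotherm: sorbed P as a function of solution concentration."""
        return s_max * k * c / (1.0 + k * c)

    c = np.array([0.5, 1, 2, 5, 10, 20, 40])         # mg/L, hypothetical
    s = np.array([55, 95, 150, 240, 300, 340, 360])  # mg/kg, hypothetical
    w = 1.0 / c                                      # assumed weights

    # curve_fit minimizes sum((residual / sigma)^2), so sigma_i = 1/sqrt(w_i)
    popt, _ = curve_fit(langmuir, c, s, p0=[400, 0.2], sigma=1.0 / np.sqrt(w))
    print(popt)  # estimated [s_max, k]
    ```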

  17. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  18. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  19. A particle image velocimetry validation database in the indoor environment using a breathing thermal manikin in rotational motion

    EPA Science Inventory

    Determination of indoor exposure levels commonly involves assumptions of fully mixed ventilation conditions. In the effort to determine contaminant levels with efficiency, the nodal approach is common in modeling of the indoor environment. To quantify the transport phenomenon or ...

  20. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    PubMed Central

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
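
    A quick simulation makes the attenuation direction concrete; a sketch of the single-mediator model with an unreliable mediator measure (all parameter values are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    x = rng.normal(size=n)                 # randomized treatment, reliable
    m = 0.5 * x + rng.normal(size=n)       # true mediator (a = 0.5)
    y = 0.4 * m + rng.normal(size=n)       # outcome (b = 0.4, so ab = 0.2)
    m_noisy = m + rng.normal(size=n)       # mediator measured with error

    def mediated_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                   # a path: m on x
        X = np.column_stack([np.ones(len(x)), m, x])
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # b path: y on m and x
        return a * b

    print(mediated_effect(x, m, y))        # close to the true 0.2
    print(mediated_effect(x, m_noisy, y))  # underestimated, roughly half
    ```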

  1. A general consumer-resource population model

    USGS Publications Warehouse

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.

  2. Testing the functional significance of microbial community composition.

    Treesearch

    Michael S. Strickland; Christian Lauber; Noah Fierer; Mark A. Bradford

    2009-01-01

    A critical assumption underlying terrestrial ecosystem models is that soil microbial communities, when placed in a common environment, will function in an identical manner regardless of the composition...

  3. How to reach linguistic consensus: a proof of convergence for the naming game.

    PubMed

    De Vylder, Bart; Tuyls, Karl

    2006-10-21

    In this paper we introduce a mathematical model of naming games. Naming games have been widely used within research on the origins and evolution of language. Despite the many interesting empirical results these studies have produced, most of this research lacks a formal elucidating theory. In this paper we show how a population of agents can reach linguistic consensus, i.e. learn to use one common language to communicate with one another. Our approach differs from existing formal work in two important ways: one, we relax the too strong assumption that an agent samples infinitely often during each time interval. This assumption is usually made to guarantee convergence of an empirical learning process to a deterministic dynamical system. Two, we provide a proof that under these new realistic conditions, our model converges to a common language for the entire population of agents. Finally the model is experimentally validated.
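
    The process whose convergence the paper proves is often simulated as the minimal naming game; a toy version (population size, game count, and seed are arbitrary) that in practice ends with every agent sharing one name:

    ```python
    import random

    random.seed(3)
    agents = [set() for _ in range(50)]      # each agent's inventory of names

    for step in range(20000):
        speaker, hearer = random.sample(range(len(agents)), 2)
        if not agents[speaker]:
            agents[speaker].add(f"w{step}")  # invent a new name
        word = random.choice(sorted(agents[speaker]))
        if word in agents[hearer]:           # success: both collapse to the word
            agents[speaker] = {word}
            agents[hearer] = {word}
        else:                                # failure: hearer adopts the word
            agents[hearer].add(word)

    print(len(set().union(*agents)))         # typically 1: linguistic consensus
    ```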

  4. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
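
    A minimal version of the simulation approach: generate a concentration-time profile from a one-compartment model with first-order absorption (hypothetical parameters, simpler than the paper's two-stage models) and derive the two summary statistics:

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    def conc(t, dose=100.0, f=1.0, v=50.0, ka=1.2, ke=0.2):
        """One-compartment, first-order absorption:
        C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
        return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    t = np.linspace(0, 48, 2000)                     # hours
    c = conc(t)
    print(np.log(c.max()), np.log(trapezoid(c, t)))  # log(Cmax), log(AUC)
    ```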

  5. The Hydrofacies Approach and Why σ²(ln K) < 5-10 is Unlikely

    NASA Astrophysics Data System (ADS)

    Fogg, G. E.

    2004-12-01

    When heterogeneity of geologic systems is characterized in terms of hydrofacies rather than solely based on K measurements, the resulting flow and transport models typically contain not only aquifer materials but also significant volumes (10-70%) of aquitard materials. This leads to clear, heuristic rationale for σ²(ln K) commonly exceeding 5 to 10, contradicting published data on σ²(ln K). I will explain the inconsistencies between commonly held assumptions of low (<1-2) σ²(ln K) and abundant geologic and hydrologic field data that indicate substantially larger values. The K data commonly cited in support of the low σ²(ln K) assumption have been misinterpreted because of unintentional, biased sampling. Geologic fundamentals and field data indicate that σ²(ln K) is commonly >10 and can easily exceed 20 in typical sedimentary deposits (not surficial soils) at spatial scales on the order of 10¹ to 10² m. Presence of large σ²(ln K) can be paramount in transport models and is often requisite for modeling observed transport phenomena such as preferential flow, extreme tailing, difficult remediation including frequent pump-and-treat failure, and significant, unanticipated mixing of groundwater ages.
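
    The heuristic is easy to reproduce: mix lognormal K samples for an aquifer facies and an aquitard facies separated by several orders of magnitude, and the between-facies contrast alone pushes σ²(ln K) well past 10. A sketch with hypothetical values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical hydrofacies: sand aquifer (K ~ 1e-4 m/s) and clay-silt
    # aquitard (K ~ 1e-9 m/s), each with modest internal variability
    k_sand = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=600)
    k_clay = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=400)  # 40% aquitard

    ln_k = np.log(np.concatenate([k_sand, k_clay]))
    print(ln_k.var())  # ~30: far above the 1-2 assumed from aquifer-only data
    ```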

  6. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    NASA Astrophysics Data System (ADS)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
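
    The correction described amounts to inflating the weighted-mean uncertainty by the Birge ratio when the data are mutually inconsistent; a sketch (the values below are made up, not the CODATA series):

    ```python
    import numpy as np

    def weighted_mean_with_rescaling(x, u):
        """Weighted mean of values x with quoted uncertainties u; if the data
        are inconsistent (Birge ratio > 1), rescale the uncertainty of the
        estimate by that common factor."""
        w = 1.0 / u**2
        mean = np.sum(w * x) / np.sum(w)
        u_mean = np.sqrt(1.0 / np.sum(w))
        birge = np.sqrt(np.sum(w * (x - mean) ** 2) / (len(x) - 1))
        return mean, u_mean * max(1.0, birge)

    x = np.array([6.62607, 6.62612, 6.62601, 6.62615])  # made-up values
    u = np.array([0.00004, 0.00003, 0.00005, 0.00002])  # quoted uncertainties
    print(weighted_mean_with_rescaling(x, u))
    ```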

  7. The Risk GP Model: the standard model of prediction in medicine.

    PubMed

    Fuller, Jonathan; Flores, Luis J

    2015-12-01

    With the ascent of modern epidemiology in the twentieth century came a new standard model of prediction in public health and clinical medicine. In this article, we describe the structure of the model. The standard model uses epidemiological measures (most commonly, risk measures) to predict outcomes (prognosis) and effect sizes (treatment) in a patient population that can then be transformed into probabilities for individual patients. In the first step, a risk measure in a study population is generalized or extrapolated to a target population. In the second step, the risk measure is particularized or transformed to yield probabilistic information relevant to a patient from the target population. Hence, we call the approach the Risk Generalization-Particularization (Risk GP) Model. There are serious problems at both stages, especially with the extent to which the required assumptions will hold and the extent to which we have evidence for the assumptions. Given that there are other models of prediction that use different assumptions, we should not inflexibly commit ourselves to one standard model. Instead, model pluralism should be standard in medical prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. The problem of the second wind turbine - a note on a common but flawed wind power estimation method

    NASA Astrophysics Data System (ADS)

    Gans, F.; Miller, L. M.; Kleidon, A.

    2012-06-01

    Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than detailed engineering specifications of wind turbine design and placement.
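
    The following sketch illustrates, under strong simplifying assumptions, why the choice of boundary condition matters: a steady bulk momentum balance in which the boundary layer receives a fixed momentum import (so the flow slows as turbine drag is added) versus a fixed hub-height velocity (no feedback). All coefficients are illustrative, and this is a caricature, not the authors' model.

    ```python
    import numpy as np

    rho, Cd, U0 = 1.2, 0.003, 8.0        # air density, surface drag coeff., undisturbed wind (illustrative)
    M_in = rho * Cd * U0**2              # momentum import tuned so U = U0 with no turbines

    Ct = np.linspace(0.0, 0.03, 7)       # aggregated turbine drag coefficient

    # Prescribed momentum import: the wind slows as turbine drag is added
    U_mom = np.sqrt(M_in / (rho * (Cd + Ct)))
    P_mom = rho * Ct * U_mom**3

    # Prescribed hub-height velocity: no feedback, extracted power grows without bound
    P_fix = rho * Ct * U0**3

    for ct, pm, pf in zip(Ct, P_mom, P_fix):
        print(f"Ct={ct:.3f}  P(momentum-conserving)={pm:6.2f} W/m^2  P(fixed U)={pf:6.2f} W/m^2")
    ```

    The two estimates agree while the turbines extract only a small share of the momentum flux, and diverge strongly in the high-extraction limit, consistent with the order-of-magnitude overestimate described above.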

  9. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. Several methods are available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  10. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    DTIC Science & Technology

    2016-06-22

    this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between

  11. ECOLOGICAL THEORY. A general consumer-resource population model.

    PubMed

    Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M

    2015-08-21

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
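
    The general model itself is not reproduced in the abstract; as a hedged illustration of a consumer-resource model with the saturating functional response mentioned above, here is the classic Rosenzweig-MacArthur special case, with illustrative parameters:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Classic consumer-resource special case (Rosenzweig-MacArthur) with a
    # saturating Holling type II functional response; parameters illustrative.
    r, K = 1.0, 10.0        # resource growth rate and carrying capacity
    a, h = 1.0, 0.5         # attack rate and handling time
    e, m = 0.6, 0.3         # conversion efficiency and consumer mortality

    def rhs(t, y):
        R, C = y
        f = a * R / (1.0 + a * h * R)          # saturating functional response
        return [r * R * (1 - R / K) - f * C,   # resource dynamics
                e * f * C - m * C]             # consumer dynamics

    sol = solve_ivp(rhs, (0, 100), [5.0, 1.0])
    R_end, C_end = sol.y[:, -1]
    print(f"resource={R_end:.2f}, consumer={C_end:.2f} at t=100")
    ```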

  12. Survey of three-dimensional numerical estuarine models

    USGS Publications Warehouse

    Cheng, Ralph T.; Smith, Peter E.

    1989-01-01

    This paper surveys the existing 3-D estuarine hydrodynamic and solute transport models by a review of the commonly used assumptions and approximations, and by an examination of the methods of solution. The model formulations, methods of solution, and known applications are surveyed and summarized in tables. In conclusion, the authors present their modeling philosophy and suggest future research needs.

  13. Exploring the Robustness of a Unidimensional Item Response Theory Model with Empirically Multidimensional Data

    ERIC Educational Resources Information Center

    Anderson, Daniel; Kahn, Joshua D.; Tindal, Gerald

    2017-01-01

    Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents' location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in…

  14. The cost-effectiveness of rotavirus vaccination: Comparative analyses for five European countries and transferability in Europe.

    PubMed

    Jit, Mark; Bilcke, Joke; Mangen, Marie-Josée J; Salo, Heini; Melliez, Hugues; Edmunds, W John; Yazdan, Yazdanpanah; Beutels, Philippe

    2009-10-19

    Cost-effectiveness analyses are usually not directly comparable between countries because of differences in analytical and modelling assumptions. We investigated the cost-effectiveness of rotavirus vaccination in five European Union countries (Belgium, England and Wales, Finland, France and the Netherlands) using a single model, burden of disease estimates supplied by national public health agencies and a subset of common assumptions. Under base case assumptions (vaccination with Rotarix, 3% discount rate, health care provider perspective, no herd immunity and quality of life of one caregiver affected by a rotavirus episode) and a cost-effectiveness threshold of €30,000, vaccination is likely to be cost effective in Finland only. However, single changes to assumptions may make it cost effective in Belgium and the Netherlands. The estimated threshold price per dose for Rotarix (excluding administration costs) to be cost effective was €41 in Belgium, €28 in England and Wales, €51 in Finland, €36 in France and €46 in the Netherlands.

  15. An Application of Unfolding and Cumulative Item Response Theory Models for Noncognitive Scaling: Examining the Assumptions and Applicability of the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Sgammato, Adrienne N.

    2009-01-01

    This study examined the applicability of a relatively new unidimensional, unfolding item response theory (IRT) model called the generalized graded unfolding model (GGUM; Roberts, Donoghue, & Laughlin, 2000). A total of four scaling methods were applied. Two commonly used cumulative IRT models for polytomous data, the Partial Credit Model and…

  16. Mating behavior and reproductive output in insecticide-resistant and -susceptible strains of the maize weevil (Sitophilus zeamais)

    USDA-ARS?s Scientific Manuscript database

    Insecticide resistance is the most broadly recognized and well studied ecological problem resulting from intensive insecticide use, which also provides useful evolutionary models of newly adapted phenotypes to changing environments. Two common assumptions in such population-oriented models are the e...

  17. ACCUMULATION OF PBDE-47 IN PRIMARY CULTURES OF RAT NEOCORTICAL CELLS.

    EPA Science Inventory

    Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...

  18. Sequencing Adventure Activities: A New Perspective.

    ERIC Educational Resources Information Center

    Bisson, Christian

    Sequencing in adventure education involves putting activities in an order appropriate to the needs of the group. Contrary to the common assumption that each adventure sequence is unique, a review of literature concerning five sequencing models reveals a certain universality. These models present sequences that move through four phases: group…

  19. A comparative study between nonlinear regression and nonparametric approaches for modelling Phalaris paradoxa seedling emergence

    USDA-ARS?s Scientific Manuscript database

    Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...

  20. Diagnostic Procedures for Detecting Nonlinear Relationships between Latent Variables

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Baldasaro, Ruth E.; Gottfredson, Nisha C.

    2012-01-01

    Structural equation models are commonly used to estimate relationships between latent variables. Almost universally, the fitted models specify that these relationships are linear in form. This assumption is rarely checked empirically, largely for lack of appropriate diagnostic techniques. This article presents and evaluates two procedures that can…

  1. Applying Additive Hazards Models for Analyzing Survival in Patients with Colorectal Cancer in Fars Province, Southern Iran

    PubMed

    Madadizadeh, Farzan; Ghanbarnejad, Amin; Ghavami, Vahid; Zare Bandamiri, Mohammad; Mohammadianpanah, Mohammad

    2017-04-01

    Introduction: Colorectal cancer (CRC) is a commonly fatal cancer that ranks third worldwide, and third and fifth among Iranian women and men, respectively. There are several methods for analyzing time-to-event data. Additive hazards regression models are preferable to the popular Cox proportional hazards model if the absolute change in hazard (risk), rather than the hazard ratio, is of primary concern, or if the proportionality assumption does not hold. Methods: This study used data gathered from the medical records of 561 colorectal cancer patients who were admitted to Namazi Hospital, Shiraz, Iran, during 2005 to 2010 and followed until December 2015. The nonparametric Aalen's additive hazards model, the semiparametric Lin and Ying's additive hazards model, and the Cox proportional hazards model were applied for data analysis. The proportionality assumption for the Cox model was evaluated with a test based on the Schoenfeld residuals, and goodness of fit of the additive models was assessed with Cox-Snell residual plots. Analyses were performed with SAS 9.2 and R 3.2 software. Results: The median follow-up time was 49 months. The five-year survival rate and the mean survival time after cancer diagnosis were 59.6% and 68.1±1.4 months, respectively. Multivariate analyses using Lin and Ying's additive model and the Cox proportional hazards model indicated that age at diagnosis, site of tumor, stage, proportion of positive lymph nodes, lymphovascular invasion, and type of treatment were factors affecting survival of the CRC patients. Conclusion: Additive models are suitable alternatives to the Cox proportional hazards model if there is interest in evaluating absolute hazard change, or when the proportionality assumption does not hold. Creative Commons Attribution License
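
    The original analyses used SAS and R; as a hedged Python analogue on synthetic data, the sketch below fits a Cox model (with a Schoenfeld-residual-based proportionality check) and Aalen's additive model using the lifelines package. The covariates, hazard structure and censoring rate are all illustrative.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import AalenAdditiveFitter, CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(60, 10, n)
    stage = rng.integers(1, 5, n)
    # Synthetic survival times whose hazard increases with age and stage (illustrative)
    T = rng.exponential(60 / (1 + 0.02 * (age - 60) + 0.3 * (stage - 1)), n).clip(0.5, 120)
    E = (rng.random(n) < 0.7).astype(int)     # roughly 30% censoring
    df = pd.DataFrame({"T": T, "E": E, "age": age, "stage": stage})

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    cph.print_summary()                       # hazard ratios
    cph.check_assumptions(df)                 # Schoenfeld-residual-based PH check

    aaf = AalenAdditiveFitter().fit(df, duration_col="T", event_col="E")
    print(aaf.cumulative_hazards_.tail())     # time-varying absolute hazard increments
    ```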

  2. Testing electrostatic equilibrium in the ionosphere by detailed comparison of ground magnetic deflection and incoherent scatter radar.

    NASA Astrophysics Data System (ADS)

    Cosgrove, R. B.; Schultz, A.; Imamura, N.

    2016-12-01

    Although electrostatic equilibrium is always assumed in the ionosphere, there is no good theoretical or experimental justification for the assumption. In fact, recent theoretical investigations suggest that the electrostatic assumption may be grossly in error. If true, many commonly used modeling methods are placed in doubt. For example, the accepted method for calculating ionospheric conductance, field line integration, may be invalid. In this talk we briefly outline the theoretical research that places the electrostatic assumption in doubt, and then describe how comparison of ground magnetic field data with incoherent scatter radar (ISR) data can be used to test the electrostatic assumption in the ionosphere. We describe a recent experiment conducted for this purpose, in which an array of magnetometers was temporarily installed under the Poker Flat AMISR.

  3. Through a glass, darkly—comparing VDDT and FVS

    Treesearch

    Donald C.E. Robinson; Sarah J. Beukema

    2012-01-01

    Land managers commonly use FVS and VDDT as planning aids. Although complementary, the models differ in their approach to projection, spatial and temporal resolution, simulation units and required input. When both are used, comparison of the model projections helps to identify differences in the assumptions of the two models and hopefully will result in more consistent...

  4. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: 1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; 2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); 3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.

  5. Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.

    PubMed

    Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N

    2016-07-01

    There is unmet need among patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent, and we hope our open-source code is useful as a step towards a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
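
    The paper's open-source model is not reproduced here; the sketch below is a minimal Markov cohort skeleton of the kind such models build on, showing how pathway probabilities, costs, utilities, discounting and time horizon combine into an ICER. All states, transition probabilities, costs and utilities are hypothetical placeholders.

    ```python
    import numpy as np

    # Minimal three-state Markov cohort model (controlled pain, uncontrolled pain,
    # discontinued); all numbers are hypothetical placeholders.
    P_new = np.array([[0.85, 0.10, 0.05],    # hypothetical novel therapy
                      [0.30, 0.60, 0.10],
                      [0.00, 0.00, 1.00]])
    P_mor = np.array([[0.75, 0.18, 0.07],    # morphine comparator
                      [0.20, 0.68, 0.12],
                      [0.00, 0.00, 1.00]])
    cost = {"new": np.array([200.0, 260.0, 60.0]),
            "mor": np.array([40.0, 100.0, 60.0])}   # per-cycle costs by state
    utility = np.array([0.80, 0.55, 0.60])          # per-cycle QALY weights

    def run(P, c, cycles=26, disc=0.03):
        x, tc, tq = np.array([1.0, 0.0, 0.0]), 0.0, 0.0
        for k in range(cycles):
            d = 1.0 / (1.0 + disc) ** k      # per-cycle discounting
            tc += d * (x @ c)
            tq += d * (x @ utility)
            x = x @ P                        # advance the cohort one cycle
        return tc, tq

    c_new, q_new = run(P_new, cost["new"])
    c_mor, q_mor = run(P_mor, cost["mor"])
    print(f"ICER = {(c_new - c_mor) / (q_new - q_mor):,.0f} per QALY")
    ```

    Varying `cycles` (the time horizon) or the transition matrices directly exposes the structural sensitivity the authors describe.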

  6. Bio-Optics of the Chesapeake Bay from Measurements and Radiative Transfer Calculations

    NASA Technical Reports Server (NTRS)

    Tzortziou, Maria; Herman, Jay R.; Gallegos, Charles L.; Neale, Patrick J.; Subramaniam, Ajit; Harding, Lawrence W., Jr.; Ahmad, Ziauddin

    2005-01-01

    We combined detailed bio-optical measurements and radiative transfer (RT) modeling to perform an optical closure experiment for optically complex and biologically productive Chesapeake Bay waters. We used this experiment to evaluate certain assumptions commonly used when modeling bio-optical processes, and to investigate the relative importance of several optical characteristics needed to accurately model and interpret remote sensing ocean-color observations in these Case 2 waters. Direct measurements were made of the magnitude, variability, and spectral characteristics of backscattering and absorption that are critical for accurate parameterizations in satellite bio-optical algorithms and underwater RT simulations. We found that the ratio of backscattering to total scattering in the mid-mesohaline Chesapeake Bay varied considerably depending on particulate loading, distance from land, and mixing processes, and had an average value of 0.0128 at 530 nm. Incorporating information on the magnitude, variability, and spectral characteristics of particulate backscattering into the RT model, rather than using a volume scattering function commonly assumed for turbid waters, was critical to obtaining agreement between RT calculations and measured radiometric quantities. In situ measurements of absorption coefficients need to be corrected for systematic overestimation due to scattering errors, and this correction commonly employs the assumption that absorption by particulate matter at near infrared wavelengths is zero.

  7. MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION

    EPA Science Inventory

    Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR which does not require ...

  8. TIME AND CONCENTRATION DEPENDENT ACCUMULATION OF [3H]-DELTAMETHRIN IN XENOPUS LAEVIS OOCYTES.

    EPA Science Inventory

    Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...

  9. Assessing Measurement Equivalence in Ordered-Categorical Data

    ERIC Educational Resources Information Center

    Elosua, Paula

    2011-01-01

    Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…

  10. Going with the Flow: Challenging Students to Make Assumptions

    ERIC Educational Resources Information Center

    Felton, Mathew D.; Anhalt, Cynthia O.; Cortez, Ricardo

    2015-01-01

    Many current and future teachers have little experience with modeling and how to integrate it into their teaching. However, with the introduction of the Common Core State Standards for Mathematics (CCSSM) and its emphasis on mathematical modeling in all grades (CCSSI 2010), this integration has become paramount. Therefore, middle-grades teachers…

  11. Introducing Multidimensional Item Response Modeling in Health Behavior and Health Education Research

    ERIC Educational Resources Information Center

    Allen, Diane D.; Wilson, Mark

    2006-01-01

    When measuring participant-reported attitudes and outcomes in the behavioral sciences, there are many instances when the common measurement assumption of unidimensionality does not hold. In these cases, the application of a multidimensional measurement model is both technically appropriate and potentially advantageous in substance. In this paper,…

  12. Governance Failure in Social Enterprise

    ERIC Educational Resources Information Center

    Low, Chris; Chinnock, Chris

    2008-01-01

    This article aims to evaluate the effectiveness of the participative, democratic model of governance commonly found within social enterprises. This model has its origins in the broader not-for-profit sector where it is widely adopted. A core assumption of this governance form is that it ensures that the organisation will take a range of views into…

  13. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…

  14. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  15. A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces

    PubMed Central

    Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.

    2014-01-01

    A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895

  16. An introduction to multidimensional measurement using Rasch models.

    PubMed

    Briggs, Derek C; Wilson, Mark

    2003-01-01

    The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates or, as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data and provides more reliable estimates of student achievement than the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.

  17. Opening new institutional spaces for grappling with uncertainty: A constructivist perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Ronlyn, E-mail: Ronlyn.Duncan@lincoln.ac.nz

    In the context of an increasing reliance on predictive computer simulation models to calculate potential project impacts, it has become common practice in impact assessment (IA) to call on proponents to disclose uncertainties in assumptions and conclusions assembled in support of a development project. Understandably, it is assumed that such disclosures lead to greater scrutiny and better policy decisions. This paper questions this assumption. Drawing on constructivist theories of knowledge and an analysis of the role of narratives in managing uncertainty, I argue that the disclosure of uncertainty can obscure as much as it reveals about the impacts of a development project. It is proposed that the opening up of institutional spaces that can facilitate the negotiation and deliberation of foundational assumptions and parameters that feed into predictive models could engender greater legitimacy and credibility for IA outcomes. Highlights: A reliance on supposedly objective disclosure is unreliable in the predictive model context in which IA is now embedded. A reliance on disclosure runs the risk of reductionism and leaves unexamined the social-interactive aspects of uncertainty. Opening new institutional spaces could facilitate deliberation on foundational predictive model assumptions.

  18. A Multi-state Model for Designing Clinical Trials for Testing Overall Survival Allowing for Crossover after Progression

    PubMed Central

    Xia, Fang; George, Stephen L.; Wang, Xiaofei

    2015-01-01

    In designing a clinical trial for comparing two or more treatments with respect to overall survival (OS), a proportional hazards assumption is commonly made. However, in many cancer clinical trials, patients pass through various disease states prior to death and, because of this, may receive treatments other than those originally assigned. For example, patients may cross over from the control treatment to the experimental treatment at progression. Even without crossover, the survival pattern after progression may be very different from the pattern prior to progression. The proportional hazards assumption will not hold in these situations, and the design power calculated on this assumption will not be correct. In this paper we describe a simple and intuitive multi-state model allowing for progression, death before progression, post-progression survival, and crossover after progression, and we apply this model to the design of clinical trials comparing the OS of two treatments. For given values of the parameters of the multi-state model, we simulate the required number of deaths to achieve a specified power and the distribution of the time required to achieve the requisite number of deaths. The results may be quite different from those derived using the usual PH assumption. PMID:27239255
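
    A minimal sketch of the simulation idea, assuming exponential sojourn times: patients either die before progression or progress and then face a post-progression hazard, which crossover modifies. All rates and the crossover hazard ratio are illustrative, not the authors' parameter values.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_os(n, h_prog, h_death_pre, h_death_post, crossover=False, hr_cross=0.7):
        """Overall survival via a simple multi-state model: progression competes with
        death before progression; survivors then face a post-progression hazard."""
        t_prog = rng.exponential(1 / h_prog, n)
        t_death_pre = rng.exponential(1 / h_death_pre, n)
        progressed = t_prog < t_death_pre
        h_post = h_death_post * (hr_cross if crossover else 1.0)  # crossover alters post-progression hazard
        t_post = rng.exponential(1 / h_post, n)
        return np.where(progressed, t_prog + t_post, t_death_pre)

    # Control arm with crossover to the experimental drug at progression vs experimental arm
    os_ctrl = simulate_os(2000, h_prog=1/10, h_death_pre=1/40, h_death_post=1/12, crossover=True)
    os_expt = simulate_os(2000, h_prog=1/14, h_death_pre=1/40, h_death_post=1/12)

    admin_cens = 36.0                                   # months of follow-up
    deaths = (os_ctrl < admin_cens).sum() + (os_expt < admin_cens).sum()
    print(f"median OS: control={np.median(os_ctrl):.1f}, experimental={np.median(os_expt):.1f} months")
    print(f"deaths observed by {admin_cens:.0f} months: {deaths}")
    ```

    Repeating such runs over a grid of rates gives the distribution of deaths (and hence power and study duration) without invoking proportional hazards.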

  19. Economic production quantity model for items with continuous quality characteristic, rework and reject

    NASA Astrophysics Data System (ADS)

    Tsou, Jia-Chi; Hejazi, Seyed Reza; Rasti Barzoki, Morteza

    2012-12-01

    The economic production quantity (EPQ) model is a well-known and commonly used inventory control technique. However, the model is built on the unrealistic assumption that all produced items are of perfect quality. Having relaxed this assumption, some researchers have studied the effects of imperfect products on inventory control techniques. This article thus attempts to develop an EPQ model with a continuous quality characteristic and rework. To this end, the study assumes that a produced item follows a general distribution pattern, with its quality being perfect, imperfect or defective. Analysis of the developed model indicates that there is an optimal lot size which generates minimum total cost. Moreover, the results show that the optimal lot size of the model equals that of the classical EPQ model when the imperfect quality percentage is zero, and nearly so when it is close to zero.
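
    The article's exact cost function is not given in the abstract; the sketch below minimizes a classical EPQ total cost numerically, with a placeholder per-unit rework term, and confirms that the optimum coincides with the classical EPQ lot size when the rework term does not depend on the lot size. All figures are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Classical EPQ ingredients (all values illustrative)
    D, P = 4000.0, 10000.0      # demand and production rates (units/yr)
    K, h = 150.0, 2.5           # setup cost per lot, holding cost per unit-year
    q, c_r = 0.04, 1.2          # imperfect fraction and per-unit rework cost (placeholder)

    def total_cost(Q):
        setup = K * D / Q
        holding = 0.5 * h * Q * (1 - D / P)
        rework = c_r * q * D            # Q-independent in this simplified sketch
        return setup + holding + rework

    res = minimize_scalar(total_cost, bounds=(1, 20000), method="bounded")
    Q_classical = np.sqrt(2 * K * D / (h * (1 - D / P)))
    print(f"numerical Q* = {res.x:.0f}, classical EPQ = {Q_classical:.0f}")
    ```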

  20. ICT Is Not Participation Is Not Democracy - eParticipation Development Models Revisited

    NASA Astrophysics Data System (ADS)

    Grönlund, Åke

    There exist several models to describe "progress" in eParticipation. The models are typically of the ladder type and share two assumptions: progress is equated with more sophisticated use of technology, and direct democracy is seen as the most advanced model of democracy. Neither assumption is true in light of democratic theory, and neither is fruitful, as the simplification distorts analysis and hence obscures actual progress made. The models convey a false impression of progress, but neither the goal, nor the path, nor the stakeholders driving the development are clearly understood, presented or evidenced. This paper analyses commonly used models based on democratic theory and eParticipation practice, and concludes that all are biased and fail to distinguish between the three dimensions an eParticipation progress model must include: relevance to democracy by any definition, applicability to different processes (capacity building as well as decision making), and measuring different levels of participation without direct democracy bias.

  1. Links between causal effects and causal association for surrogacy evaluation in a gaussian setting.

    PubMed

    Conlon, Anna; Taylor, Jeremy; Li, Yun; Diaz-Ordaz, Karla; Elliott, Michael

    2017-11-30

    Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not an exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions that lead to a correspondence of these quantities. Copyright © 2017 John Wiley & Sons, Ltd.

  2. The Wally plot approach to assess the calibration of clinical prediction models.

    PubMed

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. We propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption to address this issue. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
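
    A hedged sketch of the core idea for the simple case of binary outcomes without censoring (the paper's method additionally handles censoring and competing events): regenerate outcomes under the calibration assumption and compare the observed calibration curve against the mimicking reference curves.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    p_hat = rng.beta(2, 5, n)                  # predicted risks from some model
    y_obs = rng.random(n) < p_hat * 1.3        # observed outcomes (deliberately miscalibrated here)

    def calibration_curve(p, y, bins=10):
        """Mean predicted vs observed event rate per risk decile."""
        edges = np.quantile(p, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(p, edges[1:-1]), 0, bins - 1)
        return np.array([[p[idx == b].mean(), y[idx == b].mean()] for b in range(bins)])

    observed = calibration_curve(p_hat, y_obs)
    # "Wally"-style reference curves: outcomes regenerated under the calibration assumption
    mimics = [calibration_curve(p_hat, rng.random(n) < p_hat) for _ in range(3)]

    print("predicted vs observed event rate per risk decile:")
    print(np.round(observed, 3))   # compare by eye against the mimic curves
    ```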

  3. Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.

    PubMed

    Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C

    2015-01-01

    Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yield notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator and no unmeasured confounders that also serve as a moderator of the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.

  4. Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.

    2018-04-01

    Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.

  5. Uncertainty analysis of multi-rate kinetics of uranium desorption from sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    2014-01-01

    A multi-rate expression for uranyl [U(VI)] surface complexation reactions has been proposed to describe diffusion-limited U(VI) sorption/desorption in heterogeneous subsurface sediments. An important assumption in the rate expression is that its rate constants follow a certain type of probability distribution. In this paper, a Bayes-based Differential Evolution Markov Chain method was used to assess the distribution assumption and to analyze parameter and model structure uncertainties. U(VI) desorption from a contaminated sediment at the US Hanford 300 Area, Washington, was used as an example for detailed analysis. The results indicated that: 1) the rate constants in the multi-rate expression contain uneven uncertainties, with slower rate constants having relatively larger uncertainties; 2) the lognormal distribution is an effective assumption for the rate constants in the multi-rate model to simulate U(VI) desorption; 3) however, long-term prediction and its uncertainty may be significantly biased by the lognormal assumption for the smaller rate constants; and 4) both parameter and model structure uncertainties can affect the extrapolation of the multi-rate model, with a larger uncertainty from the model structure. The results provide important insights into the factors contributing to the uncertainties of the multi-rate expression commonly used to describe the diffusion- or mixing-limited sorption/desorption of both organic and inorganic contaminants in subsurface sediments.
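
    A minimal sketch of the multi-rate idea with the lognormal assumption discussed above: the sorbed fraction relaxes as a weighted sum of first-order sites whose rate constants are drawn from a lognormal distribution. Site fractions, distribution parameters and times are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    m = 200                                     # number of first-order rate classes
    k = rng.lognormal(-4.0, 2.0, m)             # rate constants (1/h), lognormal assumption
    f = np.full(m, 1.0 / m)                     # equal site fractions (illustrative)
    t = np.array([1.0, 10.0, 100.0, 1000.0])    # hours

    # Fraction of initially sorbed U(VI) remaining: sum_i f_i * exp(-k_i * t)
    remaining = (f[None, :] * np.exp(-np.outer(t, k))).sum(axis=1)
    for ti, ri in zip(t, remaining):
        print(f"t = {ti:7.0f} h   fraction still sorbed = {ri:.3f}")
    ```

    The heavy lognormal tail of slow sites is what controls the late-time values, which is why the long-term prediction is so sensitive to the assumed distribution of the smaller rate constants.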

  6. Valuation of financial models with non-linear state spaces

    NASA Astrophysics Data System (ADS)

    Webber, Nick

    2001-02-01

    A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.

  7. On the Kubo-Greenwood model for electron conductivity

    NASA Astrophysics Data System (ADS)

    Dufty, James; Wrighton, Jeffrey; Luo, Kai; Trickey, S. B.

    2018-02-01

    Currently, the most common method to calculate transport properties for materials under extreme conditions is based on the phenomenological Kubo-Greenwood method. The results of an inquiry into the justification and context of that model are summarized here. Specifically, the basis for its connection to equilibrium DFT and the assumption of static ions are discussed briefly.

  8. A Common Capacity Limitation for Response and Item Selection in Working Memory

    ERIC Educational Resources Information Center

    Janczyk, Markus

    2017-01-01

    Successful completion of any cognitive task requires selecting a particular action and the object the action is applied to. Oberauer (2009) suggested a working memory (WM) model comprising a declarative and a procedural part with analogous structures. One important assumption of this model is that both parts work independently of each other, and…

  9. Coherence Threshold and the Continuity of Processing: The RI-Val Model of Comprehension

    ERIC Educational Resources Information Center

    O'Brien, Edward J.; Cook, Anne E.

    2016-01-01

    Common to all models of reading comprehension is the assumption that a reader's level of comprehension is heavily influenced by their standards of coherence (van den Broek, Risden, & Husbye-Hartman, 1995). Our discussion focuses on a subcomponent of the readers' standards of coherence: the coherence threshold. We situate this discussion within…

  10. Causality and headache triggers

    PubMed Central

    Turner, Dana P.; Smitherman, Todd A.; Martin, Vincent T.; Penzien, Donald B.; Houle, Timothy T.

    2013-01-01

    Objective The objective of this study was to explore the conditions necessary to assign causal status to headache triggers. Background The term “headache trigger” is commonly used to label any stimulus that is assumed to cause headaches. However, the assumptions required for determining if a given stimulus in fact has a causal-type relationship in eliciting headaches have not been explicated. Methods A synthesis and application of Rubin’s Causal Model is applied to the context of headache causes. From this application the conditions necessary to infer that one event (trigger) causes another (headache) are outlined using basic assumptions and examples from relevant literature. Results Although many conditions must be satisfied for a causal attribution, three basic assumptions are identified for determining causality in headache triggers: 1) constancy of the sufferer; 2) constancy of the trigger effect; and 3) constancy of the trigger presentation. A valid evaluation of a potential trigger’s effect can only be undertaken once these three basic assumptions are satisfied during formal or informal studies of headache triggers. Conclusions Evaluating these assumptions is extremely difficult or infeasible in clinical practice, and satisfying them during natural experimentation is unlikely. Researchers, practitioners, and headache sufferers are encouraged to avoid natural experimentation to determine the causal effects of headache triggers. Instead, formal experimental designs or retrospective diary studies using advanced statistical modeling techniques provide the best approaches to satisfy the required assumptions and inform causal statements about headache triggers. PMID:23534872

  11. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  12. Trends in Mediation Analysis in Nursing Research: Improving Current Practice.

    PubMed

    Hertzog, Melody

    2018-06-01

    The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.

  13. Analysis of indentation creep

    Treesearch

    Don S. Stone; Joseph E. Jakes; Jonathan Puthoff; Abdelmageed A. Elmustafa

    2010-01-01

    Finite element analysis is used to simulate cone indentation creep in materials across a wide range of hardness, strain rate sensitivity, and work-hardening exponent. Modeling reveals that the commonly held assumption of the hardness strain rate sensitivity (mH) equaling the flow stress strain rate sensitivity (mσ...

  14. Analyzing recurrent events when the history of previous episodes is unknown or not taken into account: proceed with caution.

    PubMed

    Navarro, Albert; Casanovas, Georgina; Alvarado, Sergio; Moriña, David

    Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well common-baseline hazard models perform in estimating the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in the presence of event dependence and when the history of prior episodes is unknown or is not taken into account. Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, mean length of confidence intervals, and compliance with the assumption of proportional hazards. Results indicate that the bias worsens as event dependence increases, leading to a considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, effects to be estimated, and sample sizes. Common-baseline hazard models cannot be recommended when we analyse recurrent events in the presence of event dependence. It is important to have access to each subject's history of prior episodes, since this can permit better estimation of the effects of the exposures. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  15. Identifying fMRI Model Violations with Lagrange Multiplier Tests

    PubMed Central

    Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor

    2013-01-01

    The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
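
    For reference, the common double-gamma hemodynamic response specification mentioned above can be written in a few lines; the parameter values below are the widely used SPM-style defaults (peaks near 5 s and 15 s, undershoot ratio 1/6), stated here as an assumption rather than taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import gamma

    # Canonical double-gamma HRF: a positive response peak minus a scaled undershoot
    t = np.arange(0, 32, 0.1)                   # seconds
    hrf = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0
    hrf /= hrf.sum()                            # unit-sum normalisation

    print(f"peak at t = {t[np.argmax(hrf)]:.1f} s, "
          f"undershoot minimum at t = {t[np.argmin(hrf)]:.1f} s")
    ```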

  16. Assessing Omitted Confounder Bias in Multilevel Mediation Models.

    PubMed

    Tofighi, Davood; Kelley, Ken

    2016-01-01

    To draw valid inference about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite the increasing attention to address confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the indirect effects observed. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.

  17. Anchor Selection Strategies for DIF Analysis: Review, Assessment, and New Approaches

    ERIC Educational Resources Information Center

    Kopf, Julia; Zeileis, Achim; Strobl, Carolin

    2015-01-01

    Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…

  18. An improved canopy wind model for predicting wind adjustment factors and wildland fire behavior

    Treesearch

    W. J. Massman; J. M. Forthofer; M. A. Finney

    2017-01-01

    The ability to rapidly estimate wind speed beneath a forest canopy or near the ground surface in any vegetation is critical to practical wildland fire behavior models. The common metric of this wind speed is the "mid-flame" wind speed, UMF. However, the existing approach for estimating UMF has some significant shortcomings. These include the assumptions that...

  19. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design

    PubMed Central

    Valente, Matthew J.; MacKinnon, David P.

    2017-01-01

    Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional models. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example. PMID:28845097
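
    A hedged simulation sketch contrasting two of the four models (ANCOVA and difference score) on synthetic pretest-posttest data, with the mediated effect computed as the product of OLS path coefficients. All coefficients are illustrative, and this is not the authors' code or specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 5000
    tx = rng.integers(0, 2, n)                      # randomised treatment
    m1 = rng.normal(0, 1, n)                        # pretest mediator
    y1 = 0.5 * m1 + rng.normal(0, 1, n)             # pretest outcome
    m2 = 0.6 * m1 + 0.4 * tx + rng.normal(0, 1, n)  # posttest mediator (a-path = 0.4)
    y2 = 0.6 * y1 + 0.3 * m2 + rng.normal(0, 1, n)  # posttest outcome (b-path = 0.3)

    def ols(X, y):
        """Return slopes (intercept dropped) from an OLS fit."""
        X = np.column_stack([np.ones(len(y))] + list(X))
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    # ANCOVA model: adjust posttest scores for pretest scores
    a_anc = ols([tx, m1], m2)[0]
    b_anc = ols([tx, m2, m1, y1], y2)[1]
    # Difference-score model: analyse change scores
    a_dif = ols([tx], m2 - m1)[0]
    b_dif = ols([tx, m2 - m1], y2 - y1)[1]

    print(f"ANCOVA mediated effect     a*b = {a_anc * b_anc:.3f}")
    print(f"Difference-score mediated  a*b = {a_dif * b_dif:.3f}")
    ```

    With stability coefficients below 1, as here, the two estimates diverge, which is the kind of assumption sensitivity the simulation study above documents.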

  1. The Devil and Daniel's Spreadsheet

    ERIC Educational Resources Information Center

    Burke, Maurice J.

    2012-01-01

    "When making mathematical models, technology is valuable for varying assumptions, exploring consequences, and comparing predictions with data," notes the Common Core State Standards Initiative (2010, p. 72). This exploration of the recursive process in the Devil and Daniel Webster problem reveals that the symbolic spreadsheet fits this bill.…

  2. Pore Formation During Solidification of Aluminum: Reconciliation of Experimental Observations, Modeling Assumptions, and Classical Nucleation Theory

    NASA Astrophysics Data System (ADS)

    Yousefian, Pedram; Tiryakioğlu, Murat

    2018-02-01

    An in-depth discussion of pore formation is presented in this paper by first reinterpreting in situ observations reported in the literature as well as assumptions commonly made to model pore formation in aluminum castings. The physics of pore formation is reviewed through theoretical fracture pressure calculations based on classical nucleation theory for homogeneous and heterogeneous nucleation, with and without dissolved gas, i.e., hydrogen. Based on the fracture pressure for aluminum, critical pore size and the corresponding probability of vacancies clustering to form that size have been calculated using thermodynamic data reported in the literature. Calculations show that it is impossible for a pore to nucleate either homogeneously or heterogeneously in aluminum, even with dissolved hydrogen. The formation of pores in aluminum castings can only be explained by inflation of entrained surface oxide films (bifilms) under reduced pressure and/or with dissolved gas, which involves only growth, avoiding any nucleation problem. This mechanism is consistent with the reinterpretations of in situ observations as well as the assumptions made in the literature to model pore formation.
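
    The standard classical-nucleation-theory quantities behind this argument, in our notation; the paper's detailed fracture-pressure calculation is not reproduced here.

    ```latex
    \Delta G(r) \;=\; 4\pi r^{2}\sigma \;-\; \tfrac{4}{3}\pi r^{3}\,\Delta P ,
    \qquad
    r^{*} \;=\; \frac{2\sigma}{\Delta P} ,
    \qquad
    \Delta G^{*} \;=\; \frac{16\pi\sigma^{3}}{3\,\Delta P^{2}} .
    ```

    Here sigma is the melt's surface tension and Delta P the local pressure deficit acting on a spherical pore of radius r. With the surface tension of liquid aluminum (roughly 0.9 J/m^2), driving the barrier Delta G* down to a thermally surmountable level requires a pressure deficit of gigapascal order, far beyond anything present in a casting; that gap is the quantitative core of the "no nucleation" conclusion.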

  3. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system, followed by the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
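
    An illustrative sketch of why signal variability threatens the normality assumption: channel outputs are dot products of templates with images, and a variable signal turns the outputs into a mixture. The channels and images below are synthetic stand-ins, not the paper's simulated nuclear medicine data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    npix, nchan, nimg = 64 * 64, 4, 2_000
    channels = rng.normal(size=(nchan, npix))        # stand-in channel templates

    # Case 1: fixed signal + Gaussian noise -> outputs ~ normal (CLT applies).
    fixed = rng.normal(size=npix)
    imgs = fixed + rng.normal(size=(nimg, npix))
    out = imgs @ channels.T                          # one dot product per channel
    print("fixed signal, Shapiro p   :", stats.shapiro(out[:, 0]).pvalue)

    # Case 2: signal amplitude varies image to image -> mixture, often non-normal.
    amp = rng.choice([0.0, 8.0], size=(nimg, 1))     # variable signal strength
    imgs = amp * fixed + rng.normal(size=(nimg, npix))
    out = imgs @ channels.T
    print("variable signal, Shapiro p:", stats.shapiro(out[:, 0]).pvalue)
    ```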

  4. Maxwell and the normal distribution: A colored story of probability, independence, and tendency toward equilibrium

    NASA Astrophysics Data System (ADS)

    Gyenis, Balázs

    2017-02-01

    We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV, according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view, we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI, then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.

  5. Robustness of statistical tests for multiplicative terms in the additive main effects and multiplicative interaction model for cultivar trials.

    PubMed

    Piepho, H P

    1995-03-01

    The additive main effects and multiplicative interaction model is frequently used in the analysis of multilocation trials. In the analysis of such data it is of interest to decide how many of the multiplicative interaction terms are significant. Several tests for this task are available, all of which assume that errors are normally distributed with a common variance. This paper investigates the robustness of several tests (Gollob, FGH1, FGH2, FR) to departures from these assumptions. It is concluded that, because of its better robustness, the FR test is preferable. If the other tests are to be used, preliminary tests for the validity of assumptions should be performed.
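
    A sketch of the AMMI decomposition these tests operate on, with synthetic data: additive main effects are removed by double-centring, and the SVD of the residual interaction matrix yields the multiplicative terms whose significance Gollob-type tests assess.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    g, e = 12, 8                                    # genotypes x environments
    gi, ej = rng.normal(0, 1, g), rng.normal(0, 1, e)
    inter = np.outer(rng.normal(0, 1, g), rng.normal(0, 1, e))   # one real axis
    y = 5.0 + gi[:, None] + ej[None, :] + inter + rng.normal(0, 0.3, (g, e))

    # Double-centre to strip additive main effects, leaving the interaction.
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + y.mean()
    u, s, vt = np.linalg.svd(resid, full_matrices=False)

    ss = s**2                                       # interaction SS per axis
    print("share of interaction SS per axis:", np.round(ss / ss.sum(), 3))
    # Gollob assigns df_k = g + e - 1 - 2k to the k-th axis; an F-type statistic
    # then compares (ss[k] / df_k) against the error mean square from replicates.
    ```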

  6. A State Space Modeling Approach to Mediation Analysis

    ERIC Educational Resources Information Center

    Gu, Fei; Preacher, Kristopher J.; Ferrer, Emilio

    2014-01-01

    Mediation is a causal process that evolves over time. Thus, a study of mediation requires data collected throughout the process. However, most applications of mediation analysis use cross-sectional rather than longitudinal data. Another implicit assumption commonly made in longitudinal designs for mediation analysis is that the same mediation…

  7. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (QSAR conference)

    EPA Science Inventory

    Skin sensitization is an adverse outcome that has been well studied over many decades. Knowledge of the mechanism of action was recently summarized using the Adverse Outcome Pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus...

  8. A Case Study in Conflict Management.

    ERIC Educational Resources Information Center

    Chase, Lawrence J.; Smith, Val R.

    This paper presents a model for a message-centered theory of human conflict based on the assumption that conflict will result from the pairing of any two functional messages that share a common antecedent but contain different consequences with oppositely signed affect. The paper first shows how to represent conflict situations diagrammatically…

  9. Solar energy market penetration models - Science or number mysticism

    NASA Technical Reports Server (NTRS)

    Warren, E. H., Jr.

    1980-01-01

    The forecast market potential of a solar technology is an important factor determining its R&D funding. Since solar energy market penetration models are the method used to forecast market potential, they have a pivotal role in a solar technology's development. This paper critiques the applicability of the most common solar energy market penetration models. It is argued that the assumptions underlying the foundations of rigorously developed models, or the absence of a reasonable foundation for the remaining models, restrict their applicability.

  10. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.
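
    A sketch of the single-season occupancy likelihood, in which closure is exactly what lets all repeat surveys at a site share one latent occupancy state; parameter values are arbitrary.

    ```python
    # Site occupied with probability psi; if occupied, detected with probability
    # p on each of J surveys. Never-detected sites mix the two possibilities.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    S, J, psi_true, p_true = 200, 4, 0.6, 0.4
    z = rng.random(S) < psi_true                       # latent occupancy (closed)
    y = (rng.random((S, J)) < p_true) & z[:, None]     # detection histories

    def negloglik(theta):
        psi, p = 1 / (1 + np.exp(-theta))              # logit -> probability
        d = y.sum(axis=1)                              # detections per site
        lik = psi * p**d * (1 - p)**(J - d)            # occupied contribution
        lik = lik + (1 - psi) * (d == 0)               # unoccupied contribution
        return -np.log(lik).sum()

    fit = minimize(negloglik, x0=np.zeros(2))
    print("psi_hat, p_hat:", np.round(1 / (1 + np.exp(-fit.x)), 3))
    ```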

  11. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. … experimental methods used in each laboratory often imply that the statistical assumptions are not satisfied, as for example in several thermometric … triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for…

  12. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for concurrent openness and transparency in the communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial, result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by the assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing a headline or key phrasing within a written work. This presentation focuses on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication; we primarily investigate recent publications in the GCS literature that produce scenario outcomes using apparently biased pro or con assumptions. A general review of scenario economics, capture process efficacy, and specific examination of sequestration site assumptions and processes reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive 'what if' scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific, or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcomes of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcomes.

  13. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    PubMed

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the classical Cox PH model and the second part on random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, wealth index, total number of children ever born, and the child's birth order. The results further indicated that the predictive performance of random survival forests built using covariates including those that violate the PH assumption was higher than that of random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
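
    A hedged sketch of the two-part workflow on synthetic data with invented column names: fit a Cox model, run a Schoenfeld-residual-based PH check, and fall back to a random survival forest for covariates that fail it. The DHS data and the paper's exact specification are not reproduced here.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(11)
    n = 1_000
    df = pd.DataFrame({
        "sex_child": rng.integers(0, 2, n),
        "wealth_index": rng.integers(1, 6, n),
    })
    hazard = 0.02 * np.exp(0.3 * df["sex_child"] + 0.1 * df["wealth_index"])
    df["time"] = rng.exponential(1 / hazard)
    df["event"] = (df["time"] < 60).astype(int)        # censor at 60 months
    df["time"] = df["time"].clip(upper=60)

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    cph.print_summary()
    cph.check_assumptions(df, p_value_threshold=0.05)  # Schoenfeld-based PH tests

    # A covariate that fails the PH test need not be dropped: a random survival
    # forest (e.g., sksurv.ensemble.RandomSurvivalForest) makes no PH assumption
    # and can rank such covariates by permutation importance instead.
    ```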

  14. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
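
    The two estimators under discussion, in their simplest closed-population forms (the paper's models of sampling efficiency are more elaborate); the counts in the example calls are invented.

    ```python
    def chapman(n1, n2, m2):
        """Lincoln-Petersen abundance with Chapman's small-sample correction:
        n1 marked on pass 1, n2 caught on pass 2, m2 of them already marked."""
        return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

    def two_pass_removal(c1, c2):
        """Two-pass removal (Seber-Le Cren): assumes equal capture probability
        on both passes, so declining efficiency biases N downward."""
        if c2 >= c1:
            raise ValueError("removal estimator undefined unless c1 > c2")
        p_hat = 1 - c2 / c1              # implied per-pass capture probability
        return c1**2 / (c1 - c2), p_hat

    print(chapman(n1=60, n2=55, m2=20))      # ~162 trout
    print(two_pass_removal(c1=70, c2=35))    # (140.0, 0.5)
    ```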

  15. The Use of Object-Oriented Analysis Methods in Surety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.

    1999-05-01

    Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.

  16. Problems in the Definition, Interpretation, and Evaluation of Genetic Heterogeneity

    PubMed Central

    Whittemore, Alice S.; Halpern, Jerry

    2001-01-01

    Suppose that we wish to classify families with multiple cases of disease into one of three categories: those that segregate mutations of a gene of interest, those which segregate mutations of other genes, and those whose disease is due to nonhereditary factors or chance. Among families in the first two categories (the hereditary families), we wish to estimate the proportion, p, of families that segregate mutations of the gene of interest. Although this proportion is a commonly accepted concept, it is well defined only with an unambiguous definition of “family.” Even then, extraneous factors such as family sizes and structures can cause p to vary across different populations and, within a population, to be estimated differently by different studies. Restrictive assumptions about the disease are needed, in order to avoid this undesirable variation. The assumptions require that mutations of all disease-causing genes (i) have no effect on family size, (ii) have very low frequencies, and (iii) have penetrances that satisfy certain constraints. Despite the unverifiability of these assumptions, linkage studies often invoke them to estimate p, using the admixture likelihood introduced by Smith and discussed by Ott. We argue against this common practice, because (1) it also requires the stronger assumption of equal penetrances for all etiologically relevant genes; (2) even if all assumptions are met, estimates of p are sensitive to misspecification of the unknown phenocopy rate; (3) even if all the necessary assumptions are met and the phenocopy rate is correctly specified, estimates of p that are obtained by linkage programs such as HOMOG and GENEHUNTER are based on the wrong likelihood and therefore are biased in the presence of phenocopies. We show how to correct these estimates; but, nevertheless, we do not recommend the use of parametric heterogeneity models in linkage analysis, even merely as a tool for increasing the statistical power to detect linkage. This is because the assumptions required by these models cannot be verified, and their violation could actually decrease power. Instead, we suggest that estimation of p be postponed until the relevant genes have been identified. Then their frequencies and penetrances can be estimated on the basis of population-based samples and can be used to obtain more-robust estimates of p for specific populations. PMID:11170893
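
    Smith's admixture likelihood, as commonly implemented in programs such as HOMOG (notation ours): each of F families either segregates the linked gene (probability p) or does not, so

    ```latex
    L(p, \theta) \;=\; \prod_{i=1}^{F} \left[\, p \, L_i(\theta) \;+\; (1 - p)\, L_i\!\left(\tfrac{1}{2}\right) \right] ,
    ```

    where L_i(theta) is family i's linkage likelihood at recombination fraction theta, and theta = 1/2 corresponds to no linkage. The article's point is that the p estimated by maximizing this likelihood inherits all of the assumptions listed above, including equal penetrances across etiologically relevant genes and a correctly specified phenocopy rate.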

  17. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    PubMed Central

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e., order) and what counts as abnormality (i.e., disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However, we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual and a proposed Decade of the Mind, the task at hand is to revisit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  18. Caveats for correlative species distribution modeling

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.

    2015-01-01

    Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.

  19. Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice.

    PubMed

    Morrow, Nathan; Nkwake, Apollo M

    2016-12-01

    This final chapter in the volume pulls together common themes from the diverse set of articles by a group of eight authors in this issue, and presents some reflections on the next steps for improving the ways in which evaluators work with assumptions. Collectively, the authors provide a broad overview of existing and emerging approaches to the articulation and use of assumptions in evaluation theory and practice. The authors reiterate the rationale and key terminology as a common basis for working with assumptions in program design and evaluation. They highlight some useful concepts and categorizations to promote more rigorous treatment of assumptions in evaluation. A three-tier framework for fostering agency for assumption-aware evaluation practice is proposed: agency for themselves (evaluators), agency for others (stakeholders), and agency for standards and principles. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Institutional Autonomy and Academic Freedom in the Nordic Context--Similarities and Differences

    ERIC Educational Resources Information Center

    Nokkala, Terhi; Bladh, Agneta

    2014-01-01

    Owing to their common history, similarities in language and culture, long traditions in political collaboration and the shared Nordic societal model, an assumption is often made that the operational and regulatory context of universities is similar in the five Nordic countries: Denmark, Finland, Iceland, Norway and Sweden. In this article, we…

  1. Using Rasch Analysis to Identify Uncharacteristic Responses to Undergraduate Assessments

    ERIC Educational Resources Information Center

    Edwards, Antony; Alcock, Lara

    2010-01-01

    Rasch Analysis is a statistical technique that is commonly used to analyse both test data and Likert survey data, to construct and evaluate question item banks, and to evaluate change in longitudinal studies. In this article, we introduce the dichotomous Rasch model, briefly discussing its assumptions. Then, using data collected in an…

  2. Policy-Relevant Nonconvexities in the Production of Multiple Forest Benefits?

    Treesearch

    Stephen K. Swallow; Peter J. Parks; David N. Wear

    1990-01-01

    This paper challenges common assumptions about convexity in forest rotation models which optimize timber plus nontimber benefits. If a local optimum occurs earlier than the globally optimal age, policy based on marginal incentives may achieve suboptimal results. Policy-relevant nonconvexities are more likely if (i) nontimber benefits dominate for young stands while...

  3. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  4. Are They Not All the Same? Racial Heterogeneity among Black Male Undergraduates

    ERIC Educational Resources Information Center

    Harper, Shaun R.; Nichols, Andrew H.

    2008-01-01

    An erroneous assumption is often made that Black men, one of the most stereotyped groups on college and university campuses, all share common experiences and backgrounds. Using Celious and Oyserman's (2001) Heterogeneous Race Model as a conceptual framework, we explored within-group differences among Black male undergraduates at three private…

  5. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (ASCCT meeting)

    EPA Science Inventory

    Skin sensitization is an adverse outcome that has been well studied over many decades. It was summarized using the adverse outcome pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus on how AOPs can be applied for different r...

  6. Mixed infections reveal virulence differences between host-specific bee pathogens.

    PubMed

    Klinger, Ellen G; Vojvodic, Svjetlana; DeGrandi-Hoffman, Gloria; Welker, Dennis L; James, Rosalind R

    2015-07-01

    Dynamics of host-pathogen interactions are complex, often influencing the ecology, evolution and behavior of both the host and pathogen. In the natural world, infections with multiple pathogens are common, yet due to their complexity, interactions can be difficult to predict and study. Mathematical models help facilitate our understanding of these evolutionary processes, but empirical data are needed to test model assumptions and predictions. We used two common theoretical models regarding mixed infections (superinfection and co-infection) to determine which model assumptions best described a group of fungal pathogens closely associated with bees. We tested three fungal species, Ascosphaera apis, Ascosphaera aggregata and Ascosphaera larvis, in two bee hosts (Apis mellifera and Megachile rotundata). Bee survival was not significantly different in mixed infections vs. solo infections with the most virulent pathogen for either host, but fungal growth within the host was significantly altered by mixed infections. In the host A. mellifera, only the most virulent pathogen was present in the host post-infection (indicating superinfective properties). In M. rotundata, the most virulent pathogen co-existed with the lesser-virulent one (indicating co-infective properties). We demonstrated that the competitive outcomes of mixed infections were host-specific, indicating strong host specificity among these fungal bee pathogens. Published by Elsevier Inc.

  7. On the accuracy of personality judgment: a realistic approach.

    PubMed

    Funder, D C

    1995-10-01

    The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.

  8. New paradoxes of risky decision making.

    PubMed

    Birnbaum, Michael H

    2008-04-01

    During the last 25 years, prospect theory and its successor, cumulative prospect theory, replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The new findings are consistent with and, in several cases, were predicted in advance by simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches. Although they have some similarities to later models called "rank-dependent utility," configural weight models do not satisfy coalescing, the assumption that branches leading to the same consequence can be combined by adding their probabilities. Nor do they satisfy cancellation, the "independence" assumption that branches common to both alternatives can be removed. The transfer of attention exchange model, with parameters estimated from previous data, correctly predicts results with all 11 new paradoxes. Apparently, people do not frame choices as prospects but, instead, as trees with branches.
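
    Coalescing, stated concretely (our notation): branches sharing a consequence are combined by adding their probabilities, so the two gambles

    ```latex
    G_{\mathrm{split}} = (\$100,\ 0.10;\ \$100,\ 0.10;\ \$0,\ 0.80)
    \qquad \text{vs.} \qquad
    G_{\mathrm{coalesced}} = (\$100,\ 0.20;\ \$0,\ 0.80)
    ```

    receive the same value under rank-dependent and cumulative prospect theory. Configural-weight models weight the two split branches separately, so splitting or coalescing a branch can change the predicted preference; that lever drives several of the new paradoxes.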

  9. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  10. Tests of multiplicative models in psychology: a case study using the unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept.

    PubMed

    Blanton, Hart; Jaccard, James

    2006-01-01

    Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.

  11. Common-Sense Chemistry: The Use of Assumptions and Heuristics in Problem Solving

    ERIC Educational Resources Information Center

    Maeyer, Jenine Rachel

    2013-01-01

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build…

  12. Spreading dynamics on complex networks: a general stochastic approach.

    PubMed

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
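
    A minimal Gillespie-style simulation of SIS dynamics on a network, the kind of stochastic process the framework describes with motif-based Markov equations; the ring lattice, rates, and seed size are arbitrary stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, beta, mu, t_end = 200, 0.8, 1.0, 20.0
    adj = [[(i - 1) % N, (i + 1) % N] for i in range(N)]   # ring lattice
    infected = set(rng.choice(N, size=10, replace=False))

    t = 0.0
    while t < t_end and infected:
        # Event rates: recovery per I node, transmission per S-I edge.
        si_edges = [(i, j) for i in infected for j in adj[i] if j not in infected]
        r_rec, r_inf = mu * len(infected), beta * len(si_edges)
        rate = r_rec + r_inf
        t += rng.exponential(1 / rate)                     # time to next event
        if rng.random() < r_rec / rate:                    # a recovery fires
            infected.remove(rng.choice(sorted(infected)))
        else:                                              # a transmission fires
            infected.add(si_edges[rng.integers(len(si_edges))][1])

    print(f"prevalence at t={t:.2f}: {len(infected) / N:.2%}")
    ```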

  13. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods for incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular, in both its development and its application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or of the assumptions (e.g., missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
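
    A sketch of the design being studied, under deliberately simple assumptions (MCAR missingness, a normal within-group imputation model, invented sample sizes): impute several times, compute the AUC per completed data set, and combine across imputations.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 800
    y = rng.integers(0, 2, n)                    # disease status
    x = rng.normal(loc=y, scale=1.0)             # biomarker, true AUC ~ 0.76
    miss = rng.random(n) < 0.25                  # MCAR missingness for simplicity

    M, aucs = 20, []
    for _ in range(M):
        xi = x.copy()
        # Crude imputation model: normal within each disease group, fitted to
        # the observed cases, with a fresh random draw per imputation.
        for g in (0, 1):
            obs = (~miss) & (y == g)
            hole = miss & (y == g)
            xi[hole] = rng.normal(xi[obs].mean(), xi[obs].std(), hole.sum())
        aucs.append(roc_auc_score(y, xi))

    aucs = np.array(aucs)
    # Rubin-style pooling of the point estimate; a full variance would combine
    # within- and between-imputation components.
    print(f"pooled AUC {aucs.mean():.3f}, between-imputation sd {aucs.std(ddof=1):.4f}")
    ```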

  14. Community Epidemiology of Risk and Adolescent Substance Use: Practical Questions for Enhancing Prevention

    PubMed Central

    2012-01-01

    To promote an effective approach to prevention, the community diagnosis model helps communities systematically assess and prioritize risk factors to guide the selection of preventive interventions. This increasingly widely used model relies primarily on individual-level research that links risk and protective factors to substance use outcomes. I discuss common assumptions in the translation of such research concerning the definition of risk factor elevation; the equivalence, independence, and stability of relations between risk factors and problem behaviors; and community differences in risk factors and risk factor–problem behavior relations. Exploring these assumptions could improve understanding of the relations of risk factors and substance use within and across communities and enhance the efficacy of the community diagnosis model. This approach can also be applied to other areas of public health where individual and community levels of risk and outcomes intersect. PMID:22390508

  15. On the galaxy-halo connection in the EAGLE simulation

    NASA Astrophysics Data System (ADS)

    Desmond, Harry; Mao, Yao-Yuan; Wechsler, Risa H.; Crain, Robert A.; Schaye, Joop

    2017-10-01

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass-size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy-halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  16. Convergence yet Continued Complexity: A Systematic Review and Critique of Health Economic Models of Relapsing-Remitting Multiple Sclerosis in the United Kingdom.

    PubMed

    Allen, Felicity; Montgomery, Stephen; Maruszczak, Maciej; Kusel, Jeanette; Adlard, Nicholas

    2015-09-01

    Several disease-modifying therapies have marketing authorizations for the treatment of relapsing-remitting multiple sclerosis (RRMS). Given their appraisal by the National Institute for Health and Care Excellence, the objective was to systematically identify and critically evaluate the structures and assumptions used in health economic models of disease-modifying therapies for RRMS in the United Kingdom. Embase, MEDLINE, The Cochrane Library, and the National Institute for Health and Care Excellence Web site were searched systematically on March 3, 2014, to identify articles relating to health economic models in RRMS with a UK perspective. Data sources, techniques, and assumptions of the included models were extracted, compared, and critically evaluated. Of 386 results, 26 full texts were evaluated, leading to the inclusion of 18 articles (relating to 12 models). Early models varied considerably in method and structure, but convergence over time toward a Markov model with states based on disability score, a 1-year cycle length, and a lifetime time horizon was apparent. Recent models also allowed for disability improvement within the natural history of the condition. Considerable variety remains, with increasing numbers of comparators, the need for treatment sequencing, and different assumptions around efficacy waning and treatment withdrawal. Despite convergence over time to a similar Markov structure, there are still significant discrepancies between health economic models of RRMS in the United Kingdom. Differing methods, assumptions, and data sources render the comparison of model implementation and results problematic. The commonly used Markov structure leads to problems such as the inability to deal with heterogeneous populations and complexity that multiplies with the addition of treatment sequences; these would best be solved by using alternative models such as discrete event simulations. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
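
    A sketch of the converged structure the review describes: a Markov cohort model with disability-band states plus death, a 1-year cycle, and a lifetime horizon. The state names, transition matrix, costs, and utilities are invented placeholders, not values from any included model.

    ```python
    import numpy as np

    states = ["mild", "moderate", "severe", "dead"]
    P = np.array([                    # annual transition probabilities (rows sum to 1)
        [0.85, 0.10, 0.03, 0.02],
        [0.05, 0.80, 0.10, 0.05],
        [0.00, 0.05, 0.85, 0.10],
        [0.00, 0.00, 0.00, 1.00],
    ])
    utility = np.array([0.80, 0.60, 0.35, 0.0])    # QALY weight per state-year
    cost = np.array([5_000, 12_000, 25_000, 0])    # annual cost per state

    dist = np.array([1.0, 0.0, 0.0, 0.0])          # cohort starts in "mild"
    disc, qalys, costs = 0.035, 0.0, 0.0
    for year in range(60):                         # approximate lifetime horizon
        w = (1 + disc) ** -year                    # discounting
        qalys += w * dist @ utility
        costs += w * dist @ cost
        dist = dist @ P                            # one annual cycle

    print(f"discounted QALYs {qalys:.2f}, discounted cost £{costs:,.0f}")
    ```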

  17. Adaptive control: Myths and realities

    NASA Technical Reports Server (NTRS)

    Athans, M.; Valavani, L.

    1984-01-01

    It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as the absence of nonminimum-phase plant zeros, on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.

  18. On the Black-Scholes European Option Pricing Model Robustness and Generality

    NASA Astrophysics Data System (ADS)

    Takada, Hellinton Hatsuo; de Oliveira Siqueira, José

    2008-11-01

    The common presentation of the widely known and accepted Black-Scholes European option pricing model explicitly imposes some restrictions, such as the geometric Brownian motion assumption for the underlying stock price. In this paper, these usual restrictions are relaxed by using the maximum entropy principle of information theory, Pearson's distribution system, and frictionless-market and risk-neutrality theories in the calculation of a unique risk-neutral probability measure calibrated with market parameters.
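
    A sketch of the pricing logic being generalized: a European call is the discounted risk-neutral expectation of its payoff. With the lognormal (geometric Brownian motion) density, the quadrature below reproduces the Black-Scholes formula; a maximum-entropy density calibrated to market constraints could be substituted for q instead. Parameter values are arbitrary.

    ```python
    import numpy as np
    from scipy import integrate, stats

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

    def q_lognormal(s):
        """Risk-neutral GBM density of S_T: ln S_T ~ N(ln S0 + (r - s^2/2)T, s^2 T)."""
        m = np.log(S0) + (r - 0.5 * sigma**2) * T
        return stats.lognorm.pdf(s, s=sigma * np.sqrt(T), scale=np.exp(m))

    # Discounted expected payoff; payoff is zero below K, so integrate from K.
    price, _ = integrate.quad(
        lambda s: np.exp(-r * T) * (s - K) * q_lognormal(s), K, 10 * S0)
    print(f"quadrature price: {price:.4f}")

    # Closed-form Black-Scholes for comparison.
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    bs = S0 * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)
    print(f"Black-Scholes   : {bs:.4f}")
    ```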

  19. What's Love Got to Do with It? Rethinking Common Sense Assumptions

    ERIC Educational Resources Information Center

    Trachman, Matthew; Bluestone, Cheryl

    2005-01-01

    One of the most basic tasks in introductory social science classes is to get students to reexamine their common sense assumptions concerning human behavior. This article introduces a shared assignment developed for a learning community that paired an introductory sociology and psychology class. The assignment challenges students to rethink the…

  20. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Job Skills of 90's Requires New Educational Model for ALL Students.

    ERIC Educational Resources Information Center

    Daggett, Willard R.

    1992-01-01

    This bulletin describes the changing nature of work and summarizes research that has sought to identify the skills that all high school graduates and adult learners should have. It challenges several common assumptions about what preparation is needed for the workplace and how effectively schools are delivering the necessary skills. It cites the…

  2. Being Smart about Gifted Education: A Guidebook for Parents and Educators (2nd Edition)

    ERIC Educational Resources Information Center

    Matthews, Dona J.; Foster, Joanne F.

    2009-01-01

    Written for both parents and educators who work with children of advanced abilities, the authors present practical strategies to identify and nurture exceptionally high ability in children. They promote the "mastery" (rather than the "mystery") model of gifted education, and challenge several common practices and assumptions. They offer ways to…

  3. Both Sides Now: Visualizing and Drawing with the Right and Left Hemispheres of the Brain

    ERIC Educational Resources Information Center

    Schiferl, E. I.

    2008-01-01

    Neuroscience research provides new models for understanding vision that challenge Betty Edwards' (1979, 1989, 1999) assumptions about right brain vision and common conventions of "realistic" drawing. Enlisting PET and fMRI technology, neuroscience documents how the brains of normal adults respond to images of recognizable objects and scenes.…

  4. The Impact of Item Position Change on Item Parameters and Common Equating Results under the 3PL Model

    ERIC Educational Resources Information Center

    Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet

    2012-01-01

    Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…

  5. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains

    PubMed Central

    Meyer, Denny; Forbes, Don; Clarke, Stephen R.

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key points: (1) a comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition; (2) the Markov assumption appears to be valid; (3) however, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play; (4) team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946
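
    A sketch of the basic CTMC fitting-and-checking step on a synthetic event stream (the state names and holding-time distribution are invented): the maximum-likelihood exit rate from a state is departures divided by total time in the state, and a holding-time coefficient of variation far from 1 is the non-exponentiality signal that points toward a semi-Markov model.

    ```python
    from collections import defaultdict
    import numpy as np

    rng = np.random.default_rng(6)
    states = ["kick", "handball", "mark", "turnover"]
    # Synthetic notational stream: (state, holding time in seconds).
    stream = [(rng.choice(states), rng.gamma(2.0, 1.5)) for _ in range(2_000)]

    hold = defaultdict(list)
    for s, dt in stream:
        hold[s].append(dt)

    for s in states:
        d = np.array(hold[s])
        rate = len(d) / d.sum()            # ML exit rate: departures per second
        cv = d.std(ddof=1) / d.mean()      # exponential holding times => CV ~ 1
        print(f"{s:9s} rate={rate:.3f}/s  CV={cv:.2f}")
    # Gamma(2, 1.5) holding times give CV ~ 0.71 here, the kind of departure
    # from exponentiality that motivates the semi-Markov refinement.
    ```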

  7. The equilibrium assumption is valid for the kinetic treatment of most time-dependent protein-modification reactions.

    PubMed Central

    Brocklehurst, K

    1979-01-01

    To facilitate mechanistic interpretation of the kinetics of time-dependent inhibition of enzymes and of similar protein modification reactions, it is important to know when the equilibrium assumption may be applied to the two-step model in which protein and modifier first associate reversibly (rate constants k+1 and k-1) and the complex then reacts irreversibly (rate constant k+2). The conventional criterion of quasi-equilibrium, k+2 < k-1, is not always easy to assess, particularly when k+2 cannot be separately determined. It is demonstrated that the condition k+2 < k-1 is necessarily true, however, when the value of the apparent second-order rate constant for the modification reaction is much smaller than the value of k+1. Since k+1 is commonly at least 10^7 M^-1 s^-1 for substrates, it is probable that the equilibrium assumption may be properly applied to most irreversible inhibitions and modification reactions. PMID:518556
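
    The argument in symbols (notation ours): for reversible association followed by irreversible modification, the apparent second-order rate constant of the overall reaction is

    ```latex
    k_{\mathrm{app}} \;=\; \frac{k_{+1}\, k_{+2}}{k_{-1} + k_{+2}}
    \quad\Longrightarrow\quad
    k_{\mathrm{app}} \ll k_{+1}
    \;\Leftrightarrow\;
    \frac{k_{+2}}{k_{-1} + k_{+2}} \ll 1
    \;\Leftrightarrow\;
    k_{+2} \ll k_{-1} .
    ```

    So observing an apparent second-order rate constant far below the near-diffusion-limited k+1 itself certifies the quasi-equilibrium condition, even when k+2 cannot be measured separately.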

  8. Solar Irradiance Variability is Caused by the Magnetic Activity on the Solar Surface.

    PubMed

    Yeo, Kok Leng; Solanki, Sami K; Norris, Charlotte M; Beeck, Benjamin; Unruh, Yvonne C; Krivova, Natalie A

    2017-09-01

    The variation in the radiative output of the Sun, described in terms of solar irradiance, is important to climatology. A common assumption is that solar irradiance variability is driven by its surface magnetism. Verifying this assumption has, however, been hampered by the fact that models of solar irradiance variability based on solar surface magnetism have to be calibrated to observed variability. Making use of realistic three-dimensional magnetohydrodynamic simulations of the solar atmosphere and state-of-the-art solar magnetograms from the Solar Dynamics Observatory, we present a model of total solar irradiance (TSI) that does not require any such calibration. In doing so, the modeled irradiance variability is entirely independent of the observational record. (The absolute level is calibrated to the TSI record from the Total Irradiance Monitor.) The model replicates 95% of the observed variability between April 2010 and July 2016, leaving little scope for alternative drivers of solar irradiance variability at least over the time scales examined (days to years).

  9. Adjusting survival time estimates to account for treatment switching in randomized controlled trials--an economic evaluation context: methods, limitations, and recommendations.

    PubMed

    Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J

    2014-04-01

    Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
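
    A hedged sketch of the RPSFTM's "common treatment effect" logic on synthetic data (a real analysis must also handle recensoring and informative censoring, which are ignored here): a switcher's counterfactual untreated time is U(psi) = T_off + exp(psi) * T_on, and psi is chosen by g-estimation so that U is balanced across the randomized arms, here by driving the log-rank statistic toward zero.

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(9)
    n = 400
    arm = rng.integers(0, 2, n)               # 1 = experimental arm
    base = rng.exponential(12.0, n)           # counterfactual untreated time
    psi_true = -0.7                           # treatment multiplies time by exp(-psi)
    t_switch = rng.exponential(6.0, n)        # when a control would switch

    # Observed times: treated patients spend all follow-up on treatment; controls
    # switch at t_switch if still at risk, with remaining time scaled thereafter.
    T = np.where(arm == 1,
                 np.exp(-psi_true) * base,
                 np.where(t_switch < base,
                          t_switch + np.exp(-psi_true) * (base - t_switch),
                          base))
    T_on = np.where(arm == 1, T, np.clip(T - t_switch, 0.0, None))
    T_off = T - T_on

    def lr_stat(psi):
        U = T_off + np.exp(psi) * T_on        # counterfactual untreated times
        return logrank_test(U[arm == 1], U[arm == 0]).test_statistic

    psi_hat = min(np.linspace(-1.5, 0.5, 81), key=lr_stat)
    print(f"g-estimate of psi: {psi_hat:.2f} (truth {psi_true})")
    ```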

  10. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    PubMed

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. In closing, the article derives some meta-desiderata for future research on individuals' "personality".

  12. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  13. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    PubMed

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for the dynamics of FliA-dependent promoters.
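
    A minimal sketch of the kind of correction the authors describe for the first assumption, using a generic production-degradation kinetic model (the function name, integrator, and parameter values are illustrative, not the authors' code):

    ```python
    import numpy as np

    def protein_from_promoter_activity(t, f, beta=1.0, gamma=0.1):
        """Estimate protein concentration p(t) from promoter activity f(t)
        via dp/dt = beta * f(t) - gamma * p, where gamma lumps protein
        degradation and growth dilution. Forward-Euler integration."""
        p = np.zeros_like(t)
        for i in range(1, len(t)):
            dt = t[i] - t[i - 1]
            p[i] = p[i - 1] + dt * (beta * f[i - 1] - gamma * p[i - 1])
        return p

    t = np.linspace(0.0, 100.0, 1001)
    f = ((t > 20) & (t < 60)).astype(float)   # a pulse of promoter activity
    for half_life in (5.0, 50.0):             # longer-lived proteins lag the mRNA more
        p = protein_from_promoter_activity(t, f, gamma=np.log(2) / half_life)
        print(half_life, round(float(p.max()), 2))
    ```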

  14. Complex Adaptive System Models and the Genetic Analysis of Plasma HDL-Cholesterol Concentration

    PubMed Central

    Rea, Thomas J.; Brown, Christine M.; Sing, Charles F.

    2006-01-01

    Despite remarkable advances in diagnosis and therapy, ischemic heart disease (IHD) remains a leading cause of morbidity and mortality in industrialized countries. Recent efforts to estimate the influence of genetic variation on IHD risk have focused on predicting individual plasma high-density lipoprotein cholesterol (HDL-C) concentration. Plasma HDL-C concentration (mg/dl), a quantitative risk factor for IHD, has a complex multifactorial etiology that involves the actions of many genes. Single gene variations may be necessary but are not individually sufficient to predict a statistically significant increase in risk of disease. The complexity of phenotype-genotype-environment relationships involved in determining plasma HDL-C concentration has challenged commonly held assumptions about genetic causation and has led to the question of which combination of variations, in which subset of genes, in which environmental strata of a particular population significantly improves our ability to predict high or low risk phenotypes. We document the limitations of inferences from genetic research based on commonly accepted biological models, consider how evidence for real-world dynamical interactions between HDL-C determinants challenges the simplifying assumptions implicit in traditional linear statistical genetic models, and conclude by considering research options for evaluating the utility of genetic information in predicting traits with complex etiologies. PMID:17146134

  15. Password-only authenticated three-party key exchange with provable security in the standard model.

    PubMed

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  16. Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2017-01-01

    There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.
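
    The closing suggestion can be made concrete with the standard Bayesian causal-inference sketch (after Kording et al., 2007); the Gaussian likelihoods and parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    def prob_common_cause(xv, xa, sigma_v=1.0, sigma_a=2.0, sigma_p=10.0,
                          p_common=0.5):
        """p(C=1 | visual cue xv, auditory cue xa): the posterior
        probability that both cues came from one source, i.e. a graded
        'unity assumption'. Cues are modeled as Gaussian-corrupted
        readings of a source s ~ N(0, sigma_p**2)."""
        # likelihood of (xv, xa) under a single common source
        det1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
                + sigma_a**2 * sigma_p**2)
        L1 = np.exp(-((xv - xa)**2 * sigma_p**2 + xv**2 * sigma_a**2
                      + xa**2 * sigma_v**2) / (2 * det1)) / (2 * np.pi * np.sqrt(det1))
        # likelihood under two independent sources
        vv, va = sigma_v**2 + sigma_p**2, sigma_a**2 + sigma_p**2
        L2 = (np.exp(-xv**2 / (2 * vv)) / np.sqrt(2 * np.pi * vv)
              * np.exp(-xa**2 / (2 * va)) / np.sqrt(2 * np.pi * va))
        return p_common * L1 / (p_common * L1 + (1 - p_common) * L2)

    print(prob_common_cause(0.0, 0.5))   # nearby cues -> unity likely
    print(prob_common_cause(0.0, 8.0))   # discrepant cues -> unity unlikely
    ```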

  17. Are Formative Indicators Superfluous? An Extension of Aguirre-Urreta, Rönkkö, and Marakas Analysis

    ERIC Educational Resources Information Center

    Guyon, Hervé; Tensaout, Mouloud

    2016-01-01

    In this article, the authors extend the results of Aguirre-Urreta, Rönkkö, and Marakas (2016) concerning the omission of a relevant causal indicator by testing the validity of the assumption that causal indicators are entirely superfluous to the measurement model and discuss the implications for measurement theory. Contrary to common wisdom…

  18. Debt Profiles of Model Students: The Projected Debt of Highly Productive Students and Its Economic Impact

    ERIC Educational Resources Information Center

    Fincher, Mark E.

    2017-01-01

    A common misperception suggests that a high-achieving student can easily complete a degree with very limited debt, and that students with high levels of debt are thus underachievers. This assumption is supported by memories of previous decades when it was realistically possible for most students to work their way through college. This view,…

  19. Comparison of methods for estimating bird abundance and trends from historical count data

    Treesearch

    Frank R. Thompson; Frank A. La Sorte

    2008-01-01

    The use of bird counts as indices has come under increasing scrutiny because assumptions concerning detection probabilities may not be met, but there also seems to be some resistance to use of model-based approaches to estimating abundance. We used data from the United States Forest Service, Southern Region bird monitoring program to compare several common approaches...

  20. Understanding the Changing Faculty Workforce in Higher Education: A Comparison of Full-Time Non-Tenure Track and Tenure Line Experiences

    ERIC Educational Resources Information Center

    Ott, Molly; Cisneros, Jesus

    2015-01-01

    Non-tenure track faculty are a growing majority in American higher education, but research examining their work lives is limited. Moreover, the theoretical frameworks commonly used by scholars have been critiqued for reliance on ideologically charged assumptions. Using a conceptual model developed from Hackman and Oldham's (1980) Job…

  1. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    NASA Astrophysics Data System (ADS)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous phase transport is typically considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models built with the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water and full multi-phase transport, in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.

  2. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawbacks of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances. The two statistics are based on contradictory assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
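
    A small simulation sketch of the contrast drawn above, with illustrative sample sizes and a median-based constant standing in for SAM's s0 (not the paper's exact estimator):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_rep = 5000, 4
    # gene variances: a mixture of a common variance and gene-specific ones
    sigma2 = np.where(rng.random(n_genes) < 0.5, 0.25,
                      rng.gamma(2.0, 0.25, n_genes))
    effect = np.where(np.arange(n_genes) < 250, 1.0, 0.0)  # 5% truly changed
    sd = np.sqrt(sigma2)[:, None]
    x = rng.normal(0.0, sd, (n_genes, n_rep))               # condition A
    y = rng.normal(effect[:, None], sd, (n_genes, n_rep))   # condition B

    diff = y.mean(1) - x.mean(1)        # "fold change" on the log scale
    se = np.sqrt((x.var(1, ddof=1) + y.var(1, ddof=1)) / n_rep)
    t = diff / se                       # ordinary t: gene-specific variances
    d = diff / (se + np.median(se))     # shrinkage statistic in between

    for name, stat in [("fold change", diff), ("t", t), ("shrunken d", d)]:
        top = np.argsort(-np.abs(stat))[:250]
        print(name, (top < 250).mean().round(2))  # recall of true positives
    ```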

  3. Neurobiological roots of language in primate audition: common computational properties.

    PubMed

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Counter-intuitive quasi-periodic motion in the autonomous vibration of cracked Timoshenko beams

    NASA Astrophysics Data System (ADS)

    Brandon, J. A.; Abraham, O. N. L.

    1995-08-01

    The time domain behaviour of a cracked Timoshenko beam is constructed by alternation of two linear models corresponding to the open and closed condition of the crack. It might be expected that a response which is composed of the alternation of two systems with different properties would extinguish the periodicities of the constituent sub-models. The numerical studies presented illustrate the perpetuation of these features without showing any evidence for the creation of periodicities based on a common assumption of the mean period of a bilinear model.
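
    A single-mode sketch of the alternating-linear-model idea (a bilinear oscillator, not the authors' Timoshenko beam formulation; stiffness values and the integrator are illustrative):

    ```python
    import numpy as np

    def bilinear_oscillator(k_open=0.8, k_closed=1.0, x0=1.0, v0=0.0,
                            dt=1e-3, n=100_000):
        """Autonomous vibration with a breathing crack reduced to one mode:
        unit mass, stiffness k_open when x >= 0 (crack open) and k_closed
        when x < 0 (crack closed). Semi-implicit Euler integration."""
        x, v = x0, v0
        xs = np.empty(n)
        for i in range(n):
            k = k_open if x >= 0 else k_closed
            v += -k * x * dt
            x += v * dt
            xs[i] = x
        return xs

    xs = bilinear_oscillator()
    # The motion stays periodic: one half-cycle in each linear regime, so
    #   T = pi / sqrt(k_open) + pi / sqrt(k_closed)
    print("predicted period:", np.pi / np.sqrt(0.8) + np.pi / np.sqrt(1.0))
    ```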

  5. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    ERIC Educational Resources Information Center

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  6. Unorthodox Thoughts on the Nature and Mission of Contemporary Educational Psychology.

    ERIC Educational Resources Information Center

    Salomon, Gavriel

    Two assumptions commonly held in educational psychology are questioned. According to the first assumption, mental states and processes are studied in isolation. According to the second assumption, an individual's psychology, that which is of relevance to education, is often studied out of social and cultural context, rendering suspect explanations…

  7. Between-litter variation in developmental studies of hormones and behavior: Inflated false positives and diminished power.

    PubMed

    Williams, Donald R; Carlsson, Rickard; Bürkner, Paul-Christian

    2017-10-01

    Developmental studies of hormones and behavior often include littermates: rodent siblings that share early-life experiences and genes. Due to between-litter variation (i.e., litter effects), the statistical assumption of independent observations is untenable. In two literatures (natural variation in maternal care and prenatal stress), entire litters are categorized based on maternal behavior or experimental condition. Here, we (1) review both literatures; (2) simulate false positive rates for commonly used statistical methods in each literature; and (3) characterize small sample performance of multilevel models (MLM) and generalized estimating equations (GEE). We found that the assumption of independence was routinely violated (>85%), false positives (α=0.05) exceeded nominal levels (up to 0.70), and power (1-β) rarely surpassed 0.80 (even for optimistic sample and effect sizes). Additionally, we show that MLMs and GEEs have adequate performance for common research designs. We discuss implications for the extant literature, the field of behavioral neuroendocrinology, and provide recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.
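
    A minimal sketch of the litter-effect problem and the multilevel remedy, using statsmodels; the design, sample sizes, and zero true effect are illustrative:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Pups are clustered in litters and whole litters share a condition,
    # so observations within a litter are not independent.
    rng = np.random.default_rng(0)
    n_litters, pups = 20, 6
    litter = np.repeat(np.arange(n_litters), pups)
    group = np.repeat(rng.integers(0, 2, n_litters), pups)
    litter_effect = np.repeat(rng.normal(0.0, 1.0, n_litters), pups)
    y = litter_effect + rng.normal(0.0, 1.0, litter.size)  # no true group effect
    df = pd.DataFrame({"y": y, "group": group, "litter": litter})

    # Naive OLS ignores the clustering; across repeated simulations its
    # false-positive rate far exceeds the nominal 0.05, while an MLM with
    # a litter-level random intercept stays near nominal.
    print(smf.ols("y ~ group", df).fit().pvalues["group"])
    print(smf.mixedlm("y ~ group", df, groups=df["litter"]).fit().pvalues["group"])
    ```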

  8. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.

  9. Modeling approaches in avian conservation and the role of field biologists

    USGS Publications Warehouse

    Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.

    2006-01-01

    This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.

  10. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pb(ex) measurements.

    PubMed

    Porto, Paolo; Walling, Des E

    2012-10-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Validity of the semi-infinite tumor model in diffuse reflectance spectroscopy for epithelial cancer diagnosis: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Zhu, Caigang; Liu, Quan

    2011-08-01

    The accurate understanding of optical properties of human tissues plays an important role in the optical diagnosis of early epithelial cancer. Many inverse models used to determine the optical properties of a tumor have assumed that the tumor was semi-infinite, which implies infinite width and length but finite thickness. However, this simplified assumption could lead to large errors for small tumors, especially at early stages. We used a modified Monte Carlo code, which is able to simulate light transport in a layered tissue model with buried tumor-like targets, to investigate the validity of the semi-infinite tumor assumption in two common epithelial tissue models: a squamous cell carcinoma (SCC) tissue model and a basal cell carcinoma (BCC) tissue model. The SCC tissue model consisted of three layers, i.e. the top epithelium, the middle tumor and the bottom stroma. The BCC tissue model also consisted of three layers, i.e. the top epidermis, the middle tumor and the bottom dermis. Diffuse reflectance was simulated for two common fiber-optic probes. In one probe, both source and detector fibers were perpendicular to the tissue surface; while in the other, both fibers were tilted at 45 degrees relative to the normal axis of the tissue surface. It was demonstrated that the validity of the semi-infinite tumor model depends on both the fiber-optic probe configuration and the tumor dimensions. Two look-up tables, which relate the validity of the semi-infinite tumor model to the tumor width in terms of the source-detector separation, were derived to guide the selection of appropriate tumor models and fiber optic probe configuration for the optical diagnosis of early epithelial cancers.

  12. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbread populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type-I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
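
    A minimal sketch of the proposed contrast, using the Huber M-estimator in statsmodels as the robust linear model; the allelic-dosage coding and heavy-tailed noise are illustrative:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 300
    dosage = rng.binomial(2, 0.3, n).astype(float)        # genotypes 0/1/2
    expr = 0.25 * dosage + rng.standard_t(df=3, size=n)   # heavy-tailed errors

    X = sm.add_constant(dosage)
    ols = sm.OLS(expr, X).fit()
    rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
    print("OLS   beta, t:", ols.params[1].round(3), ols.tvalues[1].round(2))
    print("Huber beta, z:", rlm.params[1].round(3), rlm.tvalues[1].round(2))
    ```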

  13. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling

    PubMed Central

    Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.

    2016-01-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
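
    A sketch of the partitioned-survival bookkeeping that the comparison starts from, with illustrative exponential curves rather than the case-study data:

    ```python
    import numpy as np

    def area(y, x):
        """Trapezoidal area under y(x), written out for portability."""
        return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

    def partitioned_survival(times, os_surv, pfs_surv):
        """Mean time progression-free = area under the PFS curve; mean
        time in the progression state = area between the OS and PFS
        curves. Both curves share a time grid and are assumed already
        extrapolated to the decision horizon."""
        return area(pfs_surv, times), area(os_surv - pfs_surv, times)

    t = np.linspace(0.0, 15.0, 1501)          # years
    pfs = np.exp(-0.35 * t)                   # illustrative parametric curves
    os_ = np.exp(-0.20 * t)
    print(partitioned_survival(t, os_, pfs))  # (PF years, progressed years)
    ```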

  14. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling.

    PubMed

    Williams, Claire; Lewsey, James D; Mackay, Daniel F; Briggs, Andrew H

    2017-05-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results.

  15. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.

  16. Comparison of hydrochemical tracers to estimate source contributions to peak flow in a small, forested, headwater catchment

    USGS Publications Warehouse

    Rice, Karen C.; Hornberger, George M.

    1998-01-01

    Three-component (throughfall, soil water, groundwater) hydrograph separations at peak flow were performed on 10 storms over a 2-year period in a small forested catchment in north-central Maryland using an iterative and an exact solution. Seven pairs of tracers (deuterium and oxygen 18, deuterium and chloride, deuterium and sodium, deuterium and silica, chloride and silica, chloride and sodium, and sodium and silica) were used for three-component hydrograph separation for each storm at peak flow to determine whether or not the assumptions of hydrograph separation routinely can be met, to assess the adequacy of some commonly used tracers, to identify patterns in hydrograph-separation results, and to develop conceptual models for the patterns observed. Results of the three-component separations were not always physically meaningful, suggesting that assumptions of hydrograph separation had been violated. Uncertainties in solutions to equations for hydrograph separations were large, partly as a result of violations of assumptions used in deriving the separation equations and partly as a result of improper identification of chemical compositions of end-members. Results of three-component separations using commonly used tracers were widely variable. Consistent patterns in the amount of subsurface water contributing to peak flow (45-100%) were observed, no matter which separation method or combination of tracers was used. A general conceptual model for the sequence of contributions from the three end-members could be developed for 9 of the 10 storms. Overall results indicated that hydrochemical and hydrometric measurements need to be coupled in order to perform meaningful hydrograph separations.
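
    The three-component separation reduces to a small linear system; a sketch under the standard assumptions (constant, distinct end-member chemistry), with purely illustrative concentrations:

    ```python
    import numpy as np

    def three_component_separation(c_stream, c_tf, c_sw, c_gw):
        """Fractions (f_tf, f_sw, f_gw) of throughfall, soil water and
        groundwater at peak flow from two tracers plus mass balance:
            f_tf + f_sw + f_gw = 1
            sum_i f_i * c_i    = c_stream   (one equation per tracer)
        Violated assumptions show up as fractions outside [0, 1]."""
        A = np.array([[1.0, 1.0, 1.0],
                      [c_tf[0], c_sw[0], c_gw[0]],
                      [c_tf[1], c_sw[1], c_gw[1]]])
        b = np.array([1.0, c_stream[0], c_stream[1]])
        return np.linalg.solve(A, b)

    # Illustrative (chloride, silica) concentrations in mg/L:
    f = three_component_separation(c_stream=(2.0, 3.5), c_tf=(3.0, 0.1),
                                   c_sw=(1.5, 4.0), c_gw=(1.0, 7.0))
    print(f.round(2))  # fractions of throughfall, soil water, groundwater
    ```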

  17. Plant uptake of elements in soil and pore water: field observations versus model assumptions.

    PubMed

    Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo

    2013-09-15

    Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have been previously shown to not generally be valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to which extent observed element-specific uptake is consistent with TF model assumptions and to which extent TF's can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
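
    A sketch of the TF calculation and of the whole-plant aggregation that behaved more linearly in this study; all numbers are illustrative:

    ```python
    import numpy as np

    def transfer_factor(c_plant, c_soil):
        """Empirical soil-to-plant transfer factor TF = C_plant / C_soil;
        the implicit TF-model assumption is that plant concentration is
        linear in soil (or pore-water) concentration with zero intercept."""
        return c_plant / c_soil

    def whole_plant_concentration(conc, biomass):
        """Biomass-weighted average over plant parts (root, stem, ear)."""
        return np.average(conc, weights=biomass)

    c_parts = np.array([4.0, 1.5, 0.8])   # root, stem, ear concentrations
    w_parts = np.array([0.2, 0.5, 0.3])   # biomass fractions
    print(transfer_factor(whole_plant_concentration(c_parts, w_parts),
                          c_soil=10.0))
    ```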

  18. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and the 737-800 (CFM56-7 26K).

  19. Multiple Imputation For Combined-Survey Estimation With Incomplete Regressors In One But Not Both Surveys

    PubMed Central

    Rendall, Michael S.; Ghosh-Dastidar, Bonnie; Weden, Margaret M.; Baker, Elizabeth H.; Nazarov, Zafar

    2013-01-01

    Within-survey multiple imputation (MI) methods are adapted to pooled-survey regression estimation where one survey has more regressors, but typically fewer observations, than the other. This adaptation is achieved through: (1) larger numbers of imputations to compensate for the higher fraction of missing values; (2) model-fit statistics to check the assumption that the two surveys sample from a common universe; and (3) specifying the analysis model completely from variables present in the survey with the larger set of regressors, thereby excluding variables never jointly observed. In contrast to the typical within-survey MI context, cross-survey missingness is monotonic and easily satisfies the Missing At Random (MAR) assumption needed for unbiased MI. Large efficiency gains and substantial reduction in omitted variable bias are demonstrated in an application to sociodemographic differences in the risk of child obesity estimated from two nationally-representative cohort surveys. PMID:24223447

  20. On the galaxy–halo connection in the EAGLE simulation

    DOE PAGES

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.; ...

    2017-06-13

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Hence, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  1. Strategy evolution driven by switching probabilities in structured multi-agent systems

    NASA Astrophysics Data System (ADS)

    Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi

    2017-10-01

    The evolutionary mechanism driving the commonly observed cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, where switching probabilities between strategies drive the strategy evolution. However, the explicit and quantified relation between a player's switching probability for her strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the numbers of their neighbours. Focusing on this, here we present an augmented model by introducing an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that the individual influence on others is negatively correlated with the contact numbers specified by the network topologies. Results further provide the conditions under which the coexisting strategies can be calculated analytically.

  2. On the galaxy–halo connection in the EAGLE simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Hence, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  3. Modeling aerodynamic discontinuities and the onset of chaos in flight dynamical systems

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Chapman, G. T.; Uenal, A.

    1986-01-01

    Various representations of the aerodynamic contribution to the aircraft's equation of motion are shown to be compatible within the common assumption of their Frechet differentiability. Three forms of invalidating Frechet differentiability are identified, and the mathematical model is amended to accommodate their occurrence. Some of the ways in which chaotic behavior may emerge are discussed, first at the level of the aerodynamic contribution to the equation of motion, and then at the level of the equations of motion themselves.

  4. Modeling aerodynamic discontinuities and onset of chaos in flight dynamical systems

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Chapman, G. T.; Unal, A.

    1987-01-01

    Various representations of the aerodynamic contribution to the aircraft's equation of motion are shown to be compatible within the common assumption of their Frechet differentiability. Three forms of invalidating Frechet differentiability are identified, and the mathematical model is amended to accommodate their occurrence. Some of the ways in which chaotic behavior may emerge are discussed, first at the level of the aerodynamic contribution to the equations of motion, and then at the level of the equations of motion themselves.

  5. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  6. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263

  7. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even if the current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations, and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, the so-called moisture maximization. To this end, a probabilistic bivariate extreme-values model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
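
    A sketch of the deterministic moisture-maximization baseline that the probabilistic model generalizes; the storm sample is illustrative:

    ```python
    import numpy as np

    def moisture_maximized_pmp(storm_precip, storm_pw, max_pw):
        """Conventional moisture maximization: scale each storm by the
        ratio of climatological maximum precipitable water to the storm's
        actual precipitable water, then take the largest value as the PMP.
        It yields a single number with no uncertainty attached, which is
        the limitation the probabilistic description targets."""
        return float((storm_precip * (max_pw / storm_pw)).max())

    rng = np.random.default_rng(7)
    precip = rng.gamma(2.0, 30.0, 50)    # storm depths (mm), illustrative
    pw = rng.uniform(20.0, 60.0, 50)     # storm precipitable water (mm)
    print(moisture_maximized_pmp(precip, pw, max_pw=70.0))
    ```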

  8. Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  9. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
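
    A simulation sketch of the myth's failure mode: the raw response flunks a normality test even though the model residuals are exactly the normal errors the t and F tests require:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    group = np.repeat([0, 1], 500)
    y = 10.0 * group + rng.normal(0.0, 1.0, 1000)  # big group effect: bimodal y

    means = np.array([y[group == 0].mean(), y[group == 1].mean()])
    resid = y - means[group]
    print("raw response p =", stats.shapiro(y).pvalue)      # ~0: rejected
    print("residuals    p =", stats.shapiro(resid).pvalue)  # large: no evidence against
    ```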

  10. The EDGE-CALIFA survey: validating stellar dynamical mass models with CO kinematics

    NASA Astrophysics Data System (ADS)

    Leung, Gigi Y. C.; Leaman, Ryan; van de Ven, Glenn; Lyubenova, Mariya; Zhu, Ling; Bolatto, Alberto D.; Falcón-Barroso, Jesus; Blitz, Leo; Dannerbauer, Helmut; Fisher, David B.; Levy, Rebecca C.; Sanchez, Sebastian F.; Utomo, Dyas; Vogel, Stuart; Wong, Tony; Ziegler, Bodo

    2018-06-01

    Deriving circular velocities of galaxies from stellar kinematics can provide an estimate of their total dynamical mass, provided a contribution from the velocity dispersion of the stars is taken into account. Molecular gas (e.g. CO), on the other hand, is a dynamically cold tracer and hence acts as an independent circular velocity estimate without needing such a correction. In this paper, we test the underlying assumptions of three commonly used dynamical models, deriving circular velocities from stellar kinematics of 54 galaxies (S0-Sd) that have observations of both stellar kinematics from the Calar Alto Legacy Integral Field Area (CALIFA) survey, and CO kinematics from the Extragalactic Database for Galaxy Evolution (EDGE) survey. We test the asymmetric drift correction (ADC) method, as well as Jeans, and Schwarzschild models. The three methods each reproduce the CO circular velocity at 1Re to within 10 per cent. All three methods show larger scatter (up to 20 per cent) in the inner regions (R < 0.4Re) that may be due to an increasingly spherical mass distribution (which is not captured by the thin disc assumption in ADC), or non-constant stellar M/L ratios (for both the JAM and Schwarzschild models). This homogeneous analysis of stellar and gaseous kinematics validates that all three models can recover Mdyn at 1Re to better than 20 per cent, but users should be mindful of scatter in the inner regions where some assumptions may break down.
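
    For orientation, the thin-disc asymmetric drift relation on which the ADC method rests can be written in its common textbook form (neglecting the velocity-ellipsoid tilt term; the paper's implementation may differ in detail):

      v_c^2 \;=\; \overline{v}_{\phi}^{\,2} \;+\; \sigma_R^2 \left[ \frac{\sigma_\phi^2}{\sigma_R^2} \;-\; 1 \;-\; \frac{\partial \ln\left( \nu\,\sigma_R^2 \right)}{\partial \ln R} \right] ,

    where $\nu$ is the tracer density and $\sigma_R$, $\sigma_\phi$ are the radial and azimuthal velocity dispersions. The bracketed correction grows wherever dispersion support matters, which is precisely the inner region where the thin-disc assumption is flagged above.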

  11. Useful global-change scenarios: current issues and challenges

    NASA Astrophysics Data System (ADS)

    Parson, E. A.

    2008-10-01

    Scenarios are increasingly used to inform global-change debates, but their connection to decisions has been weak and indirect. This reflects the greater number and variety of potential users and scenario needs, relative to other decision domains where scenario use is more established. Global-change scenario needs include common elements, e.g., model-generated projections of emissions and climate change, needed by many users but in different ways and with different assumptions. For these common elements, the limited ability to engage diverse global-change users in scenario development requires extreme transparency in communicating underlying reasoning and assumptions, including probability judgments. Other scenario needs are specific to users, requiring a decentralized network of scenario and assessment organizations to disseminate and interpret common elements and add elements requiring local context or expertise. Such an approach will make global-change scenarios more useful for decisions, but not less controversial. Despite predictable attacks, scenario-based reasoning is necessary for responsible global-change decisions because decision-relevant uncertainties cannot be specified scientifically. The purpose of scenarios is not to avoid speculation, but to make the required speculation more disciplined, more anchored in relevant scientific knowledge when available, and more transparent.

  12. Reproductive control via eviction (but not the threat of eviction) in banded mongooses

    PubMed Central

    Cant, Michael A.; Hodge, Sarah J.; Bell, Matthew B. V.; Gilchrist, Jason S.; Nichols, Hazel J.

    2010-01-01

    Considerable research has focused on understanding variation in reproductive skew in cooperative animal societies, but the pace of theoretical development has far outstripped empirical testing of the models. One major class of model suggests that dominant individuals can use the threat of eviction to deter subordinate reproduction (the ‘restraint’ model), but this idea remains untested. Here, we use long-term behavioural and genetic data to test the assumptions of the restraint model in banded mongooses (Mungos mungo), a species in which subordinates breed regularly and evictions are common. We found that dominant females suffer reproductive costs when subordinates breed, and respond to these costs by evicting breeding subordinates from the group en masse, in agreement with the assumptions of the model. We found no evidence, however, that subordinate females exercise reproductive restraint to avoid being evicted in the first place. This means that the pattern of reproduction is not the result of a reproductive ‘transaction’ to avert the threat of eviction. We present a simple game theoretical analysis that suggests that eviction threats may often be ineffective at inducing pre-emptive restraint among multiple subordinates and predicts that threats of eviction (or departure) will be much more effective in dyadic relationships and linear hierarchies. Transactional models may be more applicable to these systems. Greater focus on testing the assumptions rather than predictions of skew models can lead to a better understanding of how animals control each other's reproduction, and the extent to which behaviour is shaped by overt acts versus hidden threats. PMID:20236979

  13. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when the variances are constrained to be homogeneous. When the data are non-normally distributed, assuming normality increases the number of latent classes extracted. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and on avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
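
    The class-enumeration problem can be illustrated in a cross-sectional analogue (a full latent growth model is beyond a short sketch). Data below come from one skewed population, yet BIC-based selection with normal components favors more than one class, the overextraction described above; scikit-learn's 'tied' covariance plays the role of the homogeneous-variance constraint:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(7)
      y = rng.gamma(shape=2.0, scale=1.0, size=1000).reshape(-1, 1)  # ONE skewed class

      for cov in ("tied", "full"):  # 'tied' ~ variances homogeneous across classes
          bics = [GaussianMixture(k, covariance_type=cov, random_state=0)
                  .fit(y).bic(y) for k in range(1, 5)]
          print(cov, "best k:", int(np.argmin(bics)) + 1, "BIC:", np.round(bics, 1))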

  14. Is It Just a Bad Class? Assessing the Stability of Measured Teacher Performance. CEDR Working Paper No. 2010-3.0

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Hansen, Michael

    2010-01-01

    Economic theory commonly models unobserved worker quality as a given parameter that is fixed over time, but empirical evidence supporting this assumption is sparse. In this paper we report on work estimating the stability of value-added estimates of teacher effects, an important area of investigation given that new workforce policies implicitly…

  15. Importance and pitfalls of molecular analysis to parasite epidemiology.

    PubMed

    Constantine, Clare C

    2003-08-01

    Molecular tools are increasingly being used to address questions about parasite epidemiology. Parasites represent a diverse group and they might not fit traditional population genetic models. Testing hypotheses depends equally on correct sampling, appropriate tool and/or marker choice, appropriate analysis and careful interpretation. All methods of analysis make assumptions which, if violated, make the results invalid. Some guidelines to avoid common pitfalls are offered here.

  16. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

    methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
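
    The snippet is fragmentary, but the resampling step it describes is standard. A generic sketch of multinomial resampling with replacement (our illustration, not the report's code; states and weights below are hypothetical):

      import numpy as np

      def resample(particles: np.ndarray, weights: np.ndarray,
                   rng: np.random.Generator) -> np.ndarray:
          """Draw N particles with replacement; zero-weight particles can
          never be selected, so they are eliminated at each iteration."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          idx = rng.choice(len(particles), size=len(particles), replace=True, p=w)
          return particles[idx]

      rng = np.random.default_rng(0)
      parts = rng.normal(size=(100, 2))         # hypothetical 2-D target states
      wts = np.exp(-np.sum(parts**2, axis=1))   # hypothetical likelihood weights
      wts[::7] = 0.0                            # some particles zeroed by an update
      parts = resample(parts, wts, rng)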

  17. Effect of initial conditions and of intra-event rainfall intensity variability on shallow landslide triggering return period

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2016-04-01

    Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is adopted as a quantitative metric to map landslide triggering hazard on a catchment. The most commonly applied approach to estimate such a return period consists of coupling a physically-based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. By this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links initial conditions at a given event to the final response at the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may lead in practice to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
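
    The Monte Carlo logic can be caricatured in a few lines: generate a long synthetic storm record, drive a stability model, and take the return period as the mean inter-arrival time of a factor of safety below one. The sketch below uses a toy stability response rather than Neyman-Scott plus TRIGRS, and all parameters are invented:

      import numpy as np

      rng = np.random.default_rng(3)
      years = 1000
      storms_per_year = 20                        # assumed mean storm arrival rate
      n = years * storms_per_year
      intensity = rng.exponential(5.0, n)         # storm mean intensity (mm/h), toy
      duration = rng.exponential(6.0, n)          # storm duration (h), toy

      # Toy "hydrology + stability": factor of safety falls with storm depth,
      # with noise standing in for variable initial conditions.
      depth = intensity * duration
      fs = 2.0 - 0.004 * depth + rng.normal(0.0, 0.05, n)

      failures = np.count_nonzero(fs < 1.0)
      print("triggering return period ~", years / max(failures, 1), "years")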

  18. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    PubMed

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and a correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks, sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen as the real multi-task fMRI data; both were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located the sensorimotor cortex as the group-discriminative region for both tasks and identified the superior temporal gyrus in SM and the prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to several competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
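
    One plausible reading of the two-stage idea, sketched with scikit-learn on toy data (this simplification is ours; the authors' algorithm differs in detail): CCA first aligns the two datasets via maximally correlated variates, then a joint ICA pushes the stacked variates toward independent sources:

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_subj, n_feat = 100, 50                  # hypothetical subjects x features
      X1 = rng.normal(size=(n_subj, n_feat))    # task 1 contrast maps (toy data)
      X2 = rng.normal(size=(n_subj, n_feat))    # task 2 contrast maps (toy data)

      k = 5
      cca = CCA(n_components=k).fit(X1, X2)
      U, V = cca.transform(X1, X2)              # canonical variates, column-paired

      ica = FastICA(n_components=k, random_state=0)
      S = ica.fit_transform(np.vstack([U, V]))  # joint unmixing of stacked variates
      print(S.shape)                            # (2 * n_subj, k)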

  19. Differential equations governing slip-induced pore-pressure fluctuations in a water-saturated granular medium

    USGS Publications Warehouse

    Iverson, R.M.

    1993-01-01

    Macroscopic frictional slip in water-saturated granular media occurs commonly during landsliding, surface faulting, and intense bedload transport. A mathematical model of dynamic pore-pressure fluctuations that accompany and influence such sliding is derived here by both inductive and deductive methods. The inductive derivation shows how the governing differential equations represent the physics of the steadily sliding array of cylindrical fiberglass rods investigated experimentally by Iverson and LaHusen (1989). The deductive derivation shows how the same equations result from a novel application of Biot's (1956) dynamic mixture theory to macroscopic deformation. The model consists of two linear differential equations and five initial and boundary conditions that govern solid displacements and pore-water pressures. Solid displacements and water pressures are strongly coupled, in part through a boundary condition that ensures mass conservation during irreversible pore deformation that occurs along the bumpy slip surface. Feedback between this deformation and the pore-pressure field may yield complex system responses. The dual derivations of the model help explicate key assumptions. For example, the model requires that the dimensionless parameter B, defined here through normalization of Biot's equations, is much larger than one. This indicates that solid-fluid coupling forces are dominated by viscous rather than inertial effects. A tabulation of physical and kinematic variables for the rod-array experiments of Iverson and LaHusen and for various geologic phenomena shows that the model assumptions commonly are satisfied. A subsequent paper will describe model tests against experimental data. © 1993 International Association for Mathematical Geology.

  20. Rumor spreading model with the different attitudes towards rumors

    NASA Astrophysics Data System (ADS)

    Hu, Yuhan; Pan, Qiuhui; Hou, Wenbing; He, Mingfeng

    2018-07-01

    Rumor spreading has a profound influence on people's well-being and social stability. There are many factors influencing rumor spreading. In this paper, we propose the assumption that among the general public there are three attitudes towards rumors: liking rumor spreading, disliking rumor spreading, and being hesitant (or neutral) about rumor spreading. Based on this assumption, a Susceptible-Hesitating-Affected-Resistant (SHAR) model is established, which accounts for individuals' different attitudes towards rumor spreading. We also analyze the local and global stability of the rumor-free and rumor-existence equilibria and calculate the basic reproduction number of the model. With numerical simulations, we illustrate the effect of parameter changes on rumor spreading and analyze the parameter sensitivity of the model. The results of the theoretical analysis and numerical simulations support the conclusions of this study. People with different attitudes towards rumors may play different roles in the process of rumor spreading. Surprisingly, we find that people who hesitate to spread rumors have a positive effect on the spread of rumors.
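
    The compartmental structure can be sketched as ODEs. The transition rates and the even split of resolving hesitators below are our illustrative assumptions, not the paper's equations:

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, p_like, gamma, delta = 0.5, 0.3, 0.2, 0.1   # assumed rates

      def shar(t, y):
          S, H, A, R = y
          new = beta * S * A                     # susceptibles meeting spreaders
          dS = -new
          dH = (1 - p_like) * new - delta * H    # hesitant: exposed but undecided
          dA = p_like * new + 0.5 * delta * H - gamma * A   # active spreaders
          dR = gamma * A + 0.5 * delta * H       # resistant / stiflers
          return [dS, dH, dA, dR]

      sol = solve_ivp(shar, (0, 100), [0.99, 0.0, 0.01, 0.0])
      print("final compartment sizes:", np.round(sol.y[:, -1], 3))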

  1. 39 Questionable Assumptions in Modern Physics

    NASA Astrophysics Data System (ADS)

    Volk, Greg

    2009-03-01

    The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.

  2. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.

    PubMed

    Jones, Matt; Love, Bradley C

    2011-08-01

    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.

  3. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
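
    An equilibrium wall model of the kind described reduces to solving the log law for the friction velocity at a matching height, then feeding back the wall stress. A sketch under the usual constants (kappa = 0.41, B = 5.2; the velocity, height, and viscosity below are illustrative):

      import numpy as np

      KAPPA, B = 0.41, 5.2          # standard log-law constants

      def wall_stress(U: float, y: float, nu: float, rho: float = 1.0) -> float:
          """Solve U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau,
          then return the wall shear stress tau_w = rho * u_tau**2."""
          u_tau = 0.05 * U                    # initial guess
          for _ in range(50):                 # fixed-point iteration
              u_tau = U / (np.log(y * u_tau / nu) / KAPPA + B)
          return rho * u_tau**2

      print(wall_stress(U=10.0, y=0.05, nu=1e-5))   # illustrative values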

  4. Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model

    PubMed Central

    Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks. PMID:24977229

  5. Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows

    DOE PAGES

    Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; ...

    2015-12-08

    In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity, and turbulent kinetic energy distribution in the downhill shooting flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift producing errors of ≈ 10 m s⁻¹ at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally-integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore, results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.

  6. Quality Reporting of Multivariable Regression Models in Observational Studies: Review of a Representative Sample of Articles Published in Biomedical Journals.

    PubMed

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M

    2016-05-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles, and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.

  7. A speciation solver for cement paste modeling and the semismooth Newton method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georget, Fabien, E-mail: fabieng@princeton.edu; Prévost, Jean H., E-mail: prevost@princeton.edu; Vanderbei, Robert J., E-mail: rvdb@princeton.edu

    2015-02-15

    The mineral assemblage of a cement paste may vary considerably with its environment. In addition, the water content of a cement paste is relatively low and the ionic strength of the interstitial solution is often high. These conditions are extreme with respect to the common assumptions made in speciation problems. Furthermore, the common trial-and-error algorithm for finding the phase assemblage does not provide any guarantee of convergence. We propose a speciation solver based on a semismooth Newton method adapted to the thermodynamic modeling of cement paste. The strong theoretical properties associated with these methods offer practical advantages. Results of numerical experiments indicate that the algorithm is reliable, robust, and efficient.
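
    The flavor of a semismooth Newton method can be shown on a toy complementarity system of the kind that phase presence/absence induces. This is a generic textbook sketch, not the paper's cement-chemistry formulation: solve x >= 0, f(x) >= 0, x * f(x) = 0 by rooting the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b:

      import numpy as np

      def f(x):                          # hypothetical residual function
          return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1] - 1.0])

      def phi(x):                        # componentwise Fischer-Burmeister
          fx = f(x)
          return np.sqrt(x**2 + fx**2) - x - fx

      def jac(x, h=1e-7):                # numerical generalized-Jacobian element
          n, p0 = len(x), phi(x)
          J = np.empty((n, n))
          for j in range(n):
              e = np.zeros(n); e[j] = h
              J[:, j] = (phi(x + e) - p0) / h
          return J

      x = np.ones(2)                     # strictly positive start avoids the kink
      for _ in range(30):
          x = x + np.linalg.solve(jac(x), -phi(x))
          if np.linalg.norm(phi(x)) < 1e-10:
              break
      print("x =", x, "f(x) =", f(x))    # complementarity: x_i = 0 or f_i(x) = 0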

  8. Quantum Common Causes and Quantum Causal Models

    NASA Astrophysics Data System (ADS)

    Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.

    2017-07-01

    Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C , then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.

  9. Improving land surface emissivity parameter for land surface models using portable FTIR and remote sensing observation in Taklimakan Desert

    NASA Astrophysics Data System (ADS)

    Liu, Yongqiang; Mamtimin, Ali; He, Qing

    2014-05-01

    Because land surface emissivity (ɛ) has not been reliably measured, global climate model (GCM) land surface schemes conventionally set this parameter by simple assumption, for example, 1 as in the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Prediction (NCEP) model, or 0.96 for soil and wetland in the Global and Regional Assimilation and Prediction System (GRAPES) Common Land Model (CoLM). This is the so-called emissivity assumption. Accurate broadband emissivity data are needed as model inputs to better simulate the land surface climate. It is demonstrated in this paper that the emissivity assumption induces errors in modeling the surface energy budget over the Taklimakan Desert, where ɛ is far smaller than the assumed value. One feasible solution to this problem is to apply accurate broadband emissivity in land surface models. The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument routinely measures spectral emissivities in six thermal infrared bands. Empirical regression equations are developed in this study to convert these spectral emissivities to the broadband emissivity required by land surface models. To calibrate the regression equations, a portable Fourier Transform infrared (FTIR) spectrometer was carried across the Taklimakan Desert along the highway from north to south to measure broadband emissivity directly. The observed data show broadband ɛ of around 0.89-0.92. To examine the impact of the improved ɛ on radiative energy redistribution, simulation studies were conducted using offline CoLM. The results illustrate that surface ɛ has a large impact over desert, with appreciable changes in surface skin temperature as well as evident changes in sensible heat fluxes. Keywords: Taklimakan Desert, surface broadband emissivity, Fourier Transform infrared spectrometer, MODIS, CoLM
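
    The narrowband-to-broadband conversion is a small regression problem. A sketch with synthetic calibration data (the band choice, weights, and noise level are placeholders, not the paper's fitted coefficients):

      import numpy as np

      rng = np.random.default_rng(5)
      n = 40                                      # hypothetical calibration sites
      e_bands = rng.uniform(0.85, 0.97, (n, 3))   # e.g. MODIS bands 29, 31, 32
      true_w = np.array([0.2, 0.5, 0.3])          # placeholder weights
      e_broad = e_bands @ true_w + rng.normal(0, 0.003, n)  # FTIR "measurements"

      X = np.column_stack([np.ones(n), e_bands])  # intercept + spectral emissivities
      coef, *_ = np.linalg.lstsq(X, e_broad, rcond=None)
      print("regression coefficients:", np.round(coef, 3))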

  10. Quid pro quo: a mechanism for fair collaboration in networked systems.

    PubMed

    Santos, Agustín; Fernández Anta, Antonio; López Fernández, Luis

    2013-01-01

    Collaboration may be understood as the execution of coordinated tasks (in the most general sense) by groups of users, who cooperate for achieving a common goal. Collaboration is a fundamental assumption and requirement for the correct operation of many communication systems. The main challenge when creating collaborative systems in a decentralized manner is dealing with the fact that users may behave in selfish ways, trying to obtain the benefits of the tasks without participating in their execution. In this context, Game Theory has been instrumental to model collaborative systems and the task allocation problem, and to design mechanisms for optimal allocation of tasks. In this paper, we revise the classical assumptions of these models and propose a new approach to this problem. First, we establish a system model based on heterogeneous nodes (users, players), and propose a basic distributed mechanism so that, when a new task appears, it is assigned to the most suitable node. The classical technique for compensating a node that executes a task is the use of payments (which in most networks are hard or impossible to implement). Instead, we propose a distributed mechanism for the optimal allocation of tasks without payments. We prove this mechanism to be robust even in the presence of independent selfish or rationally limited players. Additionally, our model is based on very weak assumptions, which makes the proposed mechanisms amenable to implementation in networked systems (e.g., the Internet).

  11. Forks in the road: choices in procedures for designing wildland linkages.

    PubMed

    Beier, Paul; Majka, Daniel R; Spencer, Wayne D

    2008-08-01

    Models are commonly used to identify lands that will best maintain the ability of wildlife to move between wildland blocks through matrix lands after the remaining matrix has become incompatible with wildlife movement. We offer a roadmap of 16 choices and assumptions that arise in designing linkages to facilitate movement or gene flow of focal species between 2 or more predefined wildland blocks. We recommend designing linkages to serve multiple (rather than one) focal species likely to serve as a collective umbrella for all native species and ecological processes, explicitly acknowledging untested assumptions, and using uncertainty analysis to illustrate potential effects of model uncertainty. Such uncertainty is best displayed to stakeholders as maps of modeled linkages under different assumptions. We also recommend modeling corridor dwellers (species that require more than one generation to move their genes between wildland blocks) differently from passage species (for which an individual can move between wildland blocks within a few weeks). We identify a problem, which we call the subjective translation problem, that arises because the analyst must subjectively decide how to translate measurements of resource selection into resistance. This problem can be overcome by estimating resistance from observations of animal movement, genetic distances, or interpatch movements. There is room for substantial improvement in the procedures used to design linkages robust to climate change and in tools that allow stakeholders to compare an optimal linkage design to alternative designs that minimize costs or achieve other conservation goals.

  12. Confronting ethical permissibility in animal research: rejecting a common assumption and extending a principle of justice.

    PubMed

    Choe Smith, Chong Un

    2014-04-01

    A common assumption in the selection of nonhuman animal subjects for research and the approval of research is that, if the risks of a procedure are too great for humans, and if there is a so-called scientific necessity, then it is permissible to use nonhuman animal subjects. I reject the common assumption as neglecting the central ethical issue of the permissibility of using nonhuman animal subjects and as being inconsistent with the principle of justice used in human subjects research ethics. This principle requires that certain classes of individuals not be subjected to a disproportionate share of the burdens or risks of research. I argue for an extension of this principle to nonhuman animal research and show that a prima facie violation of the principle occurs because nonhuman animals bear an overwhelmingly disproportionate share of the risks of research without sufficient justification or reciprocal benefit.

  13. Essays in the Economics of Procurement,

    DTIC Science & Technology

    1993-01-01

    Their support, encouragement, and critical reviews of draft articles were invaluable. Other notable contributions over the course of the project were...the period-by-period capital asset pricing model (CAPM). The period-by-period CAPM is common in applied work but the assumptions that underlie it are...may in turn mean that the data are inconsistent with application of the period-by-period CAPM; see Fama (1977). For discussion of other problems in

  14. Estimating the Global Prevalence of Inadequate Zinc Intake from National Food Balance Sheets: Effects of Methodological Assumptions

    PubMed Central

    Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.

    2012-01-01

    Background: The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings: National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12% and 66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprising zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance: Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
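
    The final comparison step is a cut-point calculation: the fraction of the population whose absorbable zinc falls below the mean physiological requirement. A sketch with invented numbers (the models above differ in exactly these inputs, which is the point):

      import numpy as np
      from scipy import stats

      mean_intake_mg = 2.4    # assumed mean absorbable zinc, mg/day
      cv = 0.25               # assumed inter-individual coefficient of variation
      requirement_mg = 2.0    # assumed mean physiological requirement, mg/day

      # EAR cut-point style estimate under an assumed normal intake distribution.
      prev = stats.norm.cdf(requirement_mg, loc=mean_intake_mg,
                            scale=cv * mean_intake_mg)
      print(f"estimated prevalence of inadequate intake: {prev:.1%}")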

  15. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
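
    The modeling choice is easy to illustrate outside the QTL machinery: fit normal and skew-normal densities to a skewed trait and compare likelihoods (toy data; the actual method embeds these densities in an interval-mapping mixture):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      y = stats.skewnorm.rvs(a=5.0, loc=0.0, scale=2.0, size=500, random_state=rng)

      a, loc, scale = stats.skewnorm.fit(y)          # ML fit via scipy
      ll_skew = np.sum(stats.skewnorm.logpdf(y, a, loc, scale))
      mu, sd = stats.norm.fit(y)
      ll_norm = np.sum(stats.norm.logpdf(y, mu, sd))
      print(f"logL skew-normal {ll_skew:.1f} vs normal {ll_norm:.1f}")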

  16. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
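
    The likelihood being fit is compact: for each site, a Poisson prior over latent abundance N is combined with binomial detection across occasions, truncating the sum over N at some K. A minimal sketch of the model class (not the study's simulation code; parameter values are illustrative):

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(2)
      lam_true, p_true, S, T, K = 4.0, 0.4, 150, 4, 60
      N = rng.poisson(lam_true, S)                       # latent abundances
      y = rng.binomial(N[:, None], p_true, size=(S, T))  # counts per occasion

      def negloglik(theta):
          lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
          Ns = np.arange(K + 1)
          prior = stats.poisson.pmf(Ns, lam)             # P(N = n)
          ll = 0.0
          for i in range(S):
              like_n = np.prod(stats.binom.pmf(y[i][:, None], Ns[None, :], p),
                               axis=0)                   # P(y_i | N = n)
              ll += np.log(np.sum(prior * like_n) + 1e-300)
          return -ll

      fit = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
      print("lambda-hat:", np.exp(fit.x[0]), "p-hat:", 1/(1+np.exp(-fit.x[1])))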

  17. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
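
    For reference, the quantity TOTALVAR modifies is the Allan variance; the sketch below computes the overlapping Allan deviation on synthetic white-FM noise. The reflection-based series extension that defines TOTALVAR is described in the text and only noted here in comments (our illustration, not the authors' code):

      import numpy as np

      def allan_dev(y: np.ndarray, m: int) -> float:
          """Overlapping Allan deviation at averaging factor m.
          TOTALDEV would first extend y by reflection at both ends."""
          ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # block averages
          d = ybar[m:] - ybar[:-m]                             # adjacent differences
          return float(np.sqrt(0.5 * np.mean(d**2)))

      rng = np.random.default_rng(4)
      y = rng.normal(size=4096)          # white FM noise, toy frequency series
      for m in (1, 4, 16, 64):
          print(m, allan_dev(y, m))      # expect ~ m**-0.5 slope for white FM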

  18. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
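
    What the MCMC option buys can be seen in miniature: a Metropolis sampler needs only the ability to evaluate the model, and the credible interval falls out of the chain's quantiles, at the cost of many runs. A toy posterior (this is generic Metropolis, not UCODE_2014's DREAM implementation):

      import numpy as np

      rng = np.random.default_rng(8)
      data = rng.normal(2.0, 1.0, 25)            # observations, known sigma = 1

      def log_post(theta):                       # flat prior + Gaussian likelihood
          return -0.5 * np.sum((data - theta) ** 2)

      theta, chain = 0.0, []
      for _ in range(20000):                     # each step = one "model run"
          prop = theta + rng.normal(0.0, 0.5)
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          chain.append(theta)

      lo, hi = np.percentile(chain[2000:], [2.5, 97.5])  # discard burn-in
      print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")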

  19. Numerical analysis of one-dimensional temperature data for groundwater/surface-water exchange with 1DTempPro

    NASA Astrophysics Data System (ADS)

    Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.

    2012-12-01

    Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
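
    The governing 1D equation can be stepped forward with a few lines of finite differences. The toy solver below (illustrative parameters, not VS2DH) integrates dT/dt = D d2T/dz2 - v dT/dz under a sinusoidal diurnal surface boundary and reproduces the amplitude attenuation described above:

      import numpy as np

      D = 1.0e-6            # bulk thermal diffusivity (m^2/s), assumed
      v = 1.0e-6            # effective downward thermal-front velocity (m/s), assumed
      dz, dt = 0.01, 30.0   # grid spacing (m), time step (s); explicit-stable
      z = np.arange(0.0, 1.0 + dz, dz)
      T = np.full_like(z, 15.0)
      trace = []

      for step in range(5760):                                   # two days
          T[0] = 15.0 + 5.0 * np.sin(2*np.pi*step*dt/86400)      # diurnal forcing
          lap = (T[2:] - 2*T[1:-1] + T[:-2]) / dz**2
          adv = (T[2:] - T[:-2]) / (2*dz)
          T[1:-1] += dt * (D*lap - v*adv)                        # deep end held fixed
          trace.append(T[20])                                    # record 0.2 m depth

      last_day = np.array(trace[-2880:])
      print("diurnal amplitude at 0.2 m:",
            0.5 * (last_day.max() - last_day.min()), "degC")     # attenuated vs 5.0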

  20. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
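
    The discrete hazard models named above are commonly fit by expanding the data to person-period form and running a binomial GLM; the logit link gives the proportional-odds discrete hazard. A toy sketch with hypothetical trial data (the paper's censoring-robust estimator is not reproduced here):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n = 300
      trt = rng.integers(0, 2, n)                  # randomized arm
      haz = np.where(trt == 1, 0.15, 0.25)         # per-visit hazard, assumed
      raw = rng.geometric(haz)                     # visit of event
      event = raw <= 6                             # observed within 6 visits?
      time = np.minimum(raw, 6)                    # administrative censoring at 6

      rows = []
      for i in range(n):                           # person-period expansion
          for t in range(1, time[i] + 1):
              rows.append((t, trt[i], int(event[i] and t == time[i])))
      pp = pd.DataFrame(rows, columns=["interval", "trt", "fail"])

      X = pd.get_dummies(pp["interval"], prefix="t", dtype=float)  # baseline hazard
      X["trt"] = pp["trt"].astype(float)
      fit = sm.GLM(pp["fail"], X, family=sm.families.Binomial()).fit()
      print("log-odds treatment effect:", fit.params["trt"])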

  1. Foraging decisions, patch use, and seasonality in egrets (Aves: ciconiiformes)

    USGS Publications Warehouse

    Erwin, R.M.

    1985-01-01

    Feeding snowy (Egretta thula) and great (Casmerodius albus) egrets were observed during 2 breeding seasons in coastal New Jersey and 2 brief winter periods in northeast Florida (USA). A number of tests based on assumptions of foraging models, predictions from foraging theory, and earlier empirical tests concerning time allocation and movement in foraging patches were made. Few of the expectations based on foraging theory and/or assumptions were supported by the empirical evidence. Snowy egrets fed with greater intensity and efficiency during the breeding season (when young were being fed) than during winter. They also showed some tendency to leave patches when their capture rate declined, and they spent more time foraging in patches when other birds were present nearby. Great egrets showed few of these tendencies, although they did leave patches when their intercapture intervals increased. Satiation differences had some influence on feeding rates in snowy egrets, but only at the end of feeding bouts. Some individuals of both species revisited areas in patches that had recently been exploited, and success rates were usually higher after the 2nd visit. Apparently, for predators of active prey, short-term changes in resource availability ('resource depression') may be more important than resource depletion, a common assumption in most optimal foraging theory models.

  2. Can Moral Hazard Be Resolved by Common-Knowledge in S4n-Knowledge?

    NASA Astrophysics Data System (ADS)

    Matsuhisa, Takashi

    This article investigates the relationship between common-knowledge and agreement in multi-agent systems, and applies the agreement result obtained by common-knowledge to the principal-agent model under non-partition information. We treat two problems: (1) how to capture, from an epistemic point of view, the fact that the agents agree on an event or reach consensus on it, and (2) how the agreement theorem can make progress toward settling a moral hazard problem in the principal-agent model under non-partition information. We propose a solution program, based on common-knowledge, for the moral hazard in the principal-agent model under non-partition information. We start from the assumption that the agents have the knowledge structure induced by a reflexive and transitive relation associated with the multi-modal logic S4n. Each agent obtains the membership value of an event under his/her private information, so he/she considers the event as a fuzzy set. Specifically, consider the situation that the agents commonly know all membership values of the other agents. In this circumstance we show an agreement theorem: consensus on the membership values among all agents can still be guaranteed. Furthermore, under certain assumptions we show that the moral hazard can be resolved in the principal-agent model when all the expected marginal costs are common-knowledge among the principal and agents.

  3. Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random

    PubMed Central

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David

    2014-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420

  4. On Strong Anticipation

    PubMed Central

    Stepp, N.; Turvey, M. T.

    2009-01-01

    We examine Dubois's (2003) distinction between weak anticipation and strong anticipation. Anticipation is weak if it arises from a model of the system via internal simulations. Anticipation is strong if it arises from the system itself via lawful regularities embedded in the system's ordinary mode of functioning. The assumption of weak anticipation dominates cognitive science and neuroscience and in particular the study of perception and action. The assumption of strong anticipation, however, seems to be required by anticipation's ubiquity. It is, for example, characteristic of homeostatic processes at the level of the organism, organs, and cells. We develop the formal distinction between strong and weak anticipation by elaboration of anticipating synchronization, a phenomenon arising from time delays in appropriately coupled dynamical systems. The elaboration is conducted in respect to (a) strictly physical systems, (b) the defining features of circadian rhythms, often viewed as paradigmatic of biological behavior based in internal models, (c) Pavlovian learning, and (d) forward models in motor control. We identify the common thread of strongly anticipatory systems and argue for its significance in furthering understanding of notions such as “internal”, “model” and “prediction”. PMID:20191086
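
    Anticipating synchronization is compact enough to demonstrate. With a master x' = f(x) and a slave y' = f(y) + k(x - y(t - tau)), the solution y(t) = x(t + tau) is invariant (the coupling term then vanishes), so a strongly anticipatory slave tracks the master's future without any internal model. A Euler sketch on the Lorenz system; parameters are illustrative, and stability requires small tau and moderate k:

      import numpy as np

      def f(s):                                   # Lorenz vector field
          x, y, z = s
          return np.array([10.0*(y - x), x*(28.0 - z) - y, x*y - 8.0/3.0*z])

      dt, tau, k = 0.002, 0.02, 4.0
      lag = int(tau / dt)
      master = np.array([1.0, 1.0, 25.0])
      slave = np.array([2.0, -1.0, 20.0])
      hist = [slave.copy()] * lag                 # delay buffer for y(t - tau)

      err = []
      for n in range(60000):
          delayed = hist[-lag]                    # y(t - tau)
          hist.append(slave.copy())
          m_now = master.copy()                   # x(t)
          master = master + dt * f(master)
          slave = slave + dt * (f(slave) + k * (m_now - delayed))
          err.append(np.linalg.norm(delayed - m_now))  # -> 0 if y anticipates x

      print("mean |y(t-tau) - x(t)| late in run:", np.mean(err[-5000:]))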

  5. Role of mathematical models in assessment of risk and in attempts to define management strategy.

    PubMed

    Flamm, W G; Winbush, J S

    1984-06-01

    Risk assessment of food-borne carcinogens is becoming a common practice at FDA. Actual risk is not being estimated, only the upper limit of risk. The risk assessment process involves a large number of steps and assumptions, many of which affect the numerical value estimated. The mathematical model which is to be applied is only one of the factors which affect these numerical values. To fulfill the policy objective of using the "worst plausible case" in estimating the upper limit of risk, recognition needs to be given to a proper balancing of assumptions and decisions. Interaction between risk assessors and risk managers should avoid making or giving the appearance of making specific technical decisions such as the choice of the mathematical model. The importance of this emerging field is too great to jeopardize it by inappropriately mixing scientific judgments with policy judgments. The risk manager should understand fully the points and range of uncertainty involved in arriving at the estimates of risk which must necessarily affect the choice of the policy or regulatory options available.

  6. A clinical trial design using the concept of proportional time using the generalized gamma ratio distribution.

    PubMed

    Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S

    2017-11-20

    Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
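
    A rough power-simulation sketch of the proportional-time idea, assuming invented generalized gamma parameters, no censoring, and a Mann-Whitney rank test as a crude stand-in for the trial's actual test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Control-arm survival times from a generalized gamma; under proportional
# time (PT), treatment-arm times are control times scaled by a fixed ratio.
a, c, scale = 2.0, 0.8, 12.0      # hypothetical gengamma shape/scale values
time_ratio = 1.5                  # treatment prolongs survival times by 50%
n_per_arm, n_sims, alpha = 60, 2000, 0.05

rejections = 0
for _ in range(n_sims):
    t_ctrl = stats.gengamma.rvs(a, c, scale=scale, size=n_per_arm,
                                random_state=rng)
    t_trt = time_ratio * stats.gengamma.rvs(a, c, scale=scale, size=n_per_arm,
                                            random_state=rng)
    # Rank test on (uncensored) times as a stand-in for a survival test.
    if stats.mannwhitneyu(t_trt, t_ctrl, alternative="greater").pvalue < alpha:
        rejections += 1

print(f"empirical power at n={n_per_arm}/arm: {rejections / n_sims:.2f}")
```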

  7. Modeling Spatial Dependencies and Semantic Concepts in Data Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju

    Data mining is the process of discovering new patterns and relationships in large datasets. However, several studies have shown that general data mining techniques often fail to extract meaningful patterns and relationships from spatial data owing to the violation of fundamental geospatial principles. In this tutorial, we introduce basic principles behind explicit modeling of spatial and semantic concepts in data mining. In particular, we focus on modeling these concepts in the widely used classification, clustering, and prediction algorithms. Classification is the process of learning a structure or model (from user-given inputs) and applying the known model to new data. Clustering is the process of discovering groups and structures in the data that are "similar," without applying any known structures in the data. Prediction is the process of finding a function that models (explains) the data with least error. One common assumption among all these methods is that the data are independent and identically distributed. Such assumptions do not hold well in spatial data, where spatial dependency and spatial heterogeneity are the norm. In addition, spatial semantics are often ignored by data mining algorithms. In this tutorial we cover recent advances in explicit modeling of spatial dependencies and semantic concepts in data mining.

  8. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  9. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    DOE PAGES

    Burr, Tom; Hamada, Michael S.; Howell, John; ...

    2013-01-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
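
    A toy version of the threshold-estimation problem (mixture components, false alarm rate, and sample size are illustrative, not from the safeguards examples): when residuals really come from a mixture, a threshold computed under a single-Gaussian assumption badly misses the false alarm target, while a distribution-free empirical quantile does not.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Residuals from a two-component mixture (e.g., two operating modes in
# solution monitoring); the target false alarm rate is 0.1%.
resid = np.concatenate([rng.normal(0.0, 1.0, 9000),
                        rng.normal(0.5, 3.0, 1000)])
far = 0.001

# Threshold assuming a single-Gaussian data-generating mechanism...
thr_gauss = resid.mean() + norm.ppf(1 - far) * resid.std(ddof=1)
# ...versus a distribution-free empirical quantile of the residuals.
thr_emp = np.quantile(resid, 1 - far)

for name, thr in (("Gaussian", thr_gauss), ("empirical", thr_emp)):
    print(f"{name} threshold {thr:.2f} -> observed alarm rate "
          f"{(resid > thr).mean():.4f}")
```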

  10. Changes in corticostriatal connectivity during reinforcement learning in humans.

    PubMed

    Horga, Guillermo; Maia, Tiago V; Marsh, Rachel; Hao, Xuejun; Xu, Dongrong; Duan, Yunsuo; Tau, Gregory Z; Graniello, Barbara; Wang, Zhishun; Kangarlu, Alayar; Martinez, Diana; Packard, Mark G; Peterson, Bradley S

    2015-02-01

    Many computational models assume that reinforcement learning relies on changes in synaptic efficacy between cortical regions representing stimuli and striatal regions involved in response selection, but this assumption has thus far lacked empirical support in humans. We recorded hemodynamic signals with fMRI while participants navigated a virtual maze to find hidden rewards. We fitted a reinforcement-learning algorithm to participants' choice behavior and evaluated the neural activity and the changes in functional connectivity related to trial-by-trial learning variables. Activity in the posterior putamen during choice periods increased progressively during learning. Furthermore, the functional connections between the sensorimotor cortex and the posterior putamen strengthened progressively as participants learned the task. These changes in corticostriatal connectivity differentiated participants who learned the task from those who did not. These findings provide a direct link between changes in corticostriatal connectivity and learning, thereby supporting a central assumption common to several computational models of reinforcement learning. © 2014 Wiley Periodicals, Inc.
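
    A minimal sketch of the kind of reinforcement-learning algorithm commonly fitted to choice behavior in such studies: a delta-rule update with a softmax policy on a hypothetical two-armed task. The trial-by-trial prediction errors and values produced this way are the learning variables regressed against neural activity; all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-armed task: arm 0 pays reward with prob 0.8, arm 1 with prob 0.2.
p_reward, alpha, beta, n_trials = np.array([0.8, 0.2]), 0.2, 3.0, 200
q = np.zeros(2)
choices, rpes = [], []

for _ in range(n_trials):
    p_choose = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
    a = rng.choice(2, p=p_choose)
    r = float(rng.random() < p_reward[a])
    delta = r - q[a]                       # reward prediction error
    q[a] += alpha * delta                  # delta-rule value update
    choices.append(a); rpes.append(delta)

print(f"prop. optimal choices, last 50 trials: "
      f"{np.mean(np.array(choices[-50:]) == 0):.2f}")
print(f"mean |RPE|, first vs last 50 trials: "
      f"{np.mean(np.abs(rpes[:50])):.2f} vs {np.mean(np.abs(rpes[-50:])):.2f}")
```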

  11. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
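
    The variance inflation from paired fates is easy to reproduce by Monte Carlo. In the sketch below (one simple way to induce dependency; all values illustrative), each animal's partner copies its fate with probability rho, and the variance of the estimated survival rate inflates by roughly a factor of 1 + rho relative to the binomial (independence) value.

```python
import numpy as np

rng = np.random.default_rng(4)

def survival_estimates(n_pairs, p, rho, n_reps):
    """Survival proportions when pair members share fates with prob rho."""
    est = np.empty(n_reps)
    for i in range(n_reps):
        a = rng.random(n_pairs) < p             # fate of first pair member
        shared = rng.random(n_pairs) < rho      # does the partner copy it?
        b = np.where(shared, a, rng.random(n_pairs) < p)
        est[i] = np.concatenate([a, b]).mean()
    return est

n_pairs, p = 100, 0.85
theoretical = p * (1 - p) / (2 * n_pairs)      # binomial (independence) variance
for rho in (0.0, 0.5, 0.9):
    emp = survival_estimates(n_pairs, p, rho, 5000).var(ddof=1)
    print(f"rho={rho:.1f}: empirical var / theoretical var = {emp / theoretical:.2f}")
```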

  12. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  13. Conceptual Change and Science Achievement Related to a Lesson Sequence on Acids and Bases among African American Alternative High School Students: A Teacher's Practical Arguments and the Voice of the "Other"

    ERIC Educational Resources Information Center

    Wood, Lynda Charese

    2012-01-01

    The study of teaching and learning during the period of translating ideals of reform into classroom practice enables us to understand student-teacher-researcher symbiotic learning. In line with this assumption, the purpose of this study is threefold: (1) observe effects of the "Common Knowledge Construction Model" (CKCM), a conceptual…

  14. Testing Atmospheric Retrieval Modeling Assumptions for Transiting Planet Atmospheres: Preparatory science for the James Webb Space Telescope and beyond.

    NASA Astrophysics Data System (ADS)

    Line, Michael

    The field of transiting exoplanet atmosphere characterization has grown considerably over the past decade given the wealth of photometric and spectroscopic data from the Hubble and Spitzer space telescopes. In order to interpret these data, atmospheric models combined with Bayesian approaches are required. From spectra, these approaches permit us to infer fundamental atmospheric properties and how their compositions can relate back to planet formation. However, such approaches must make a wide range of assumptions regarding the physics/parameterizations included in the model atmospheres. There has yet to be a comprehensive investigation exploring how these model assumptions influence our interpretations of exoplanetary spectra. Understanding the impact of these assumptions is especially important since the James Webb Space Telescope (JWST) is expected to invest a substantial portion of its time observing transiting planet atmospheres. It is therefore prudent to optimize and enhance our tools to maximize the scientific return from the revolutionary data to come. The primary goal of the proposed work is to determine the pieces of information we can robustly learn from transiting planet spectra as obtained by JWST and other future, space-based platforms, by investigating commonly overlooked model assumptions. We propose to explore the following effects and how they impact our ability to infer exoplanet atmospheric properties: 1. Stellar/Planetary Uncertainties: Transit/occultation eclipse depths and subsequent planetary spectra are measured relative to their host stars. How do stellar uncertainties, on radius, effective temperature, metallicity, and gravity, as well as uncertainties in the planetary radius and gravity, propagate into the uncertainties on atmospheric composition and thermal structure? Will these uncertainties significantly bias our atmospheric interpretations? Is it possible to use the relative measurements of the planetary spectra to provide additional constraints on the stellar properties? 2. The "1D" Assumption: Atmospheres are inherently three-dimensional. Many exoplanet atmosphere models, especially within retrieval frameworks, assume 1D physics and chemistry when interpreting spectra. How does this "1D" atmosphere assumption bias our interpretation of exoplanet spectra? Do we have to consider global temperature variations such as day-night contrasts or hot spots? What about spatially inhomogeneous molecular abundances and clouds? How will this change our interpretations of phase resolved spectra? 3. Clouds/Hazes: Understanding how clouds/hazes impact transit spectra is absolutely critical if we are to obtain proper estimates of basic atmospheric quantities. How do the assumptions in cloud physics bias our inferences of molecular abundances in transmission? What kind of data (wavelengths, signal-to-noise, resolution) do we need to infer cloud composition, vertical extent, spatial distribution (patchy or global), and size distributions? The proposed work is relevant and timely to the scope of the NASA Exoplanet Research program. The proposed work aims to further develop the critical theoretical modeling tools required to rigorously interpret transiting exoplanet atmosphere data in order to maximize the science return from JWST and beyond. 
This work will serve as a benchmark study for defining the data (wavelength ranges, signal-to-noises, and resolutions) required from a modeling perspective to "characterize exoplanets and their atmospheres in order to inform target and operational choices for current NASA missions, and/or targeting, operational, and formulation data for future NASA observatories". Doing so will allow us to better "understand the chemical and physical processes of exoplanets (their atmospheres)", which will ultimately "improve understanding of the origins of exoplanetary systems" through robust planetary elemental abundance determinations.

  15. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  16. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  17. A Comparison of Agent-Based Models and the Parametric G-Formula for Causal Inference.

    PubMed

    Murray, Eleanor J; Robins, James M; Seage, George R; Freedberg, Kenneth A; Hernán, Miguel A

    2017-07-15

    Decision-making requires choosing from treatments on the basis of correctly estimated outcome distributions under each treatment. In the absence of randomized trials, 2 possible approaches are the parametric g-formula and agent-based models (ABMs). The g-formula has been used exclusively to estimate effects in the population from which data were collected, whereas ABMs are commonly used to estimate effects in multiple populations, necessitating stronger assumptions. Here, we describe potential biases that arise when ABM assumptions do not hold. To do so, we estimated 12-month mortality risk in simulated populations differing in prevalence of an unknown common cause of mortality and a time-varying confounder. The ABM and g-formula correctly estimated mortality and causal effects when all inputs were from the target population. However, whenever any inputs came from another population, the ABM gave biased estimates of mortality, and often of causal effects, even when the true effect was null. In the absence of unmeasured confounding and model misspecification, both methods produce valid causal inferences for a given population when all inputs are from that population. However, ABMs may result in bias when extrapolated to populations that differ in the distribution of unmeasured outcome determinants, even when the causal network linking variables is identical. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
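
    For a point-treatment setting (a deliberate simplification of the time-varying g-formula used in the paper), the parametric g-formula reduces to standardization over the confounder distribution. A sketch on simulated data with invented coefficients, contrasting the confounded naive contrast with the standardized one:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Simulated target population: L confounds treatment A and outcome Y.
L = rng.binomial(1, 0.4, n)
A = rng.binomial(1, np.where(L == 1, 0.7, 0.3))
Y = rng.binomial(1, 0.05 + 0.10 * L - 0.03 * A)   # true effect of A: -0.03

# Naive contrast is confounded by L.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Point-treatment g-formula (standardization):
# E[Y | do(a)] = sum_l E[Y | A=a, L=l] * P(L=l)
risk = {a: sum(Y[(A == a) & (L == l)].mean() * (L == l).mean() for l in (0, 1))
        for a in (0, 1)}

print(f"naive difference: {naive:+.3f} (confounded)")
print(f"g-formula difference: {risk[1] - risk[0]:+.3f} (approx. true -0.030)")
```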

  18. Osmotic Transport across Cell Membranes in Nondilute Solutions: A New Nondilute Solute Transport Equation

    PubMed Central

    Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.

    2009-01-01

    The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. With suitable transport equations fitting only two permeability coefficients, the fits were as good as with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741
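
    A dimensionless Euler sketch of the two-parameter (2P) formalism discussed above, using only a hydraulic conductivity and a solute permeability (no reflection coefficient); all parameter values are illustrative. It reproduces the characteristic shrink-swell response of a cell exposed to a permeating cryoprotectant.

```python
import numpy as np

def two_param(lp=0.5, ps=0.02, m_cpa_ext=4.0, dt=0.01, t_end=300.0):
    """Two-parameter (2P) osmotic model, dimensionless: v = relative cell
    water volume, s = intracellular permeating-solute (CPA) content.
    Impermeant content is 1 (isotonic); outside = 1 salt + m_cpa_ext CPA."""
    v, s = 1.0, 0.0
    traj = []
    for _ in range(int(t_end / dt)):
        m_int = (1.0 + s) / v                    # total internal osmolality
        m_ext = 1.0 + m_cpa_ext                  # total external osmolality
        v += dt * (-lp * (m_ext - m_int))        # water flux (volume change)
        s += dt * (ps * (m_cpa_ext - s / v))     # permeating-solute flux
        traj.append(v)
    return np.array(traj)

traj = two_param()
print(f"min relative volume {traj.min():.2f} (osmotic shrinkage), "
      f"volume at t_end {traj[-1]:.2f} (re-swelling as CPA permeates)")
```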

  19. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
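
    The Helmert-transform step is easy to verify numerically: an orthonormal Helmert matrix diagonalizes any compound symmetry covariance, isolating the common component in the first coordinate. A small check with arbitrary size and variance components:

```python
import numpy as np

def helmert(n):
    """Orthonormal Helmert matrix; the first row is the scaled mean contrast."""
    h = np.zeros((n, n))
    h[0] = 1.0 / np.sqrt(n)
    for k in range(1, n):
        h[k, :k] = 1.0 / np.sqrt(k * (k + 1))
        h[k, k] = -k / np.sqrt(k * (k + 1))
    return h

n, sigma2, tau2 = 5, 1.0, 0.4
cs = sigma2 * np.eye(n) + tau2 * np.ones((n, n))   # compound symmetry

h = helmert(n)
print(np.allclose(h @ h.T, np.eye(n)))   # orthogonality check: True
print(np.round(h @ cs @ h.T, 6))         # diagonalized covariance
```

    The resulting diagonal is sigma2 + n*tau2 in the first (mean) coordinate and sigma2 elsewhere, which is what allows the covariance components to be tested through simple posterior factors.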

  20. On the validity of the incremental approach to estimate the impact of cities on air quality

    NASA Astrophysics Data System (ADS)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components; consequently, two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve our confidence in the model results.
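
    The decomposition can be made concrete with toy numbers (invented for illustration): the urban increment equals the city impact only when the city contributes nothing at the rural site and the two zero-emission backgrounds are equal, which are exactly the two assumptions tested above.

```python
# Toy decomposition of the city impact versus the measured urban increment.
# Concentration at each site = background (with zero city emissions) + the
# city's own contribution at that site. All numbers are illustrative.
bg_urban, bg_rural = 8.0, 7.0             # zero-city-emission backgrounds
city_at_urban, city_at_rural = 6.0, 1.5   # city's contribution at each site

c_urban = bg_urban + city_at_urban
c_rural = bg_rural + city_at_rural

increment = c_urban - c_rural   # what the incremental approach reports
impact = city_at_urban          # what we actually want to know

# increment = impact - city_at_rural + (bg_urban - bg_rural): the two extra
# terms vanish only if the rural site is unaffected by the city AND the
# two backgrounds are equal -- the two assumptions discussed above.
print(f"urban increment {increment:.1f} vs true city impact {impact:.1f}")
```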

  1. High Female Survival Promotes Evolution of Protogyny and Sexual Conflict

    PubMed Central

    Degen, Tobias; Hovestadt, Thomas; Mitesser, Oliver; Hölker, Franz

    2015-01-01

    Existing models explaining the evolution of sexual dimorphism in the timing of emergence (SDT) in Lepidoptera assume equal mortality rates for males and females. The limiting assumption of equal mortality rates has the consequence that these models are only able to explain the evolution of emergence of males before females, i.e. protandry—the more common temporal sequence of emergence in Lepidoptera. The models fail, however, to provide adaptive explanations for the evolution of protogyny, in which females emerge before males, even though protogyny is not rare in insects. The assumption of equal mortality rates seems too restrictive for many insects, such as butterflies. To investigate the influence of unequal mortality rates on the evolution of SDT, we present a generalised version of a previously published model where we relax this assumption. We find that longer life-expectancy of females compared to males can indeed favour the evolution of protogyny as a fitness enhancing strategy. Moreover, the encounter rate between females and males and the sex-ratio are two important factors that also influence the evolution of optimal SDT. If considered independently for females and males, the predicted strategies can be shown to be evolutionarily stable (ESS). Under the assumption of equal mortality rates, the difference between the females’ and males’ ESS remains typically very small. However, female and male ESS may be quite dissimilar if mortality rates are different. This creates the potential for an ‘evolutionary conflict’ between females and males. Bagworm moths (Lepidoptera: Psychidae) provide an exemplary case where life-history attributes are such that protogyny should indeed be the optimal emergence strategy from the males’ and females’ perspectives: (i) Female longevity is considerably larger than that of males, (ii) encounter rates between females and males are presumably low, and (iii) females mate only once. Protogyny is indeed the general mating strategy found in the bagworm family. PMID:25775473

  2. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.

  3. Data mining of tree-based models to analyze freeway accident frequency.

    PubMed

    Chang, Li-Yen; Chen, Wen-Chieh

    2005-01-01

    Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.

  4. Cognitive-psychology expertise and the calculation of the probability of a wrongful conviction.

    PubMed

    Rouder, Jeffrey N; Wixted, John T; Christenfeld, Nicholas J S

    2018-05-08

    Cognitive psychologists are familiar with how their expertise in understanding human perception, memory, and decision-making is applicable to the justice system. They may be less familiar with how their expertise in statistical decision-making and their comfort working in noisy real-world environments is just as applicable. Here we show how this expertise in ideal-observer models may be leveraged to calculate the probability of guilt of Gary Leiterman, a man convicted of murder on the basis of DNA evidence. We show by common probability theory that Leiterman is likely a victim of a tragic contamination event rather than a murderer. Making any calculation of the probability of guilt necessarily relies on subjective assumptions. The conclusion about Leiterman's innocence is not overly sensitive to the assumptions-the probability of innocence remains high for a wide range of reasonable assumptions. We note that cognitive psychologists may be well suited to make these calculations because as working scientists they may be comfortable with the role a reasonable degree of subjectivity plays in analysis.
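
    A sketch of the flavor of ideal-observer calculation the authors describe, with loudly hypothetical numbers rather than the case's actual figures: the DNA match is explained either by guilt or by laboratory contamination, and under any small prior of guilt a plausible contamination rate dominates the explanation of the match.

```python
# Illustrative Bayes' rule calculation -- all rates below are hypothetical
# stand-ins, not the paper's figures. The samples from the two cases were
# handled in the same lab, so contamination is a live alternative.
for p_guilt_prior in (1e-7, 1e-6, 1e-5):
    p_match_given_guilt = 1.0     # a true perpetrator's DNA would match
    p_contamination = 1e-4        # plausible co-processing contamination rate
    p_random_match = 1e-9         # coincidental genotype match

    p_match_given_innocent = p_contamination + p_random_match
    posterior = (p_match_given_guilt * p_guilt_prior) / (
        p_match_given_guilt * p_guilt_prior
        + p_match_given_innocent * (1.0 - p_guilt_prior))
    print(f"prior {p_guilt_prior:.0e} -> P(guilt | match) = {posterior:.4f}")
```

    The posterior stays small across a wide range of priors, which mirrors the abstract's point that the conclusion is not overly sensitive to the subjective assumptions.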

  5. Autotrophs' challenge to Dynamic Energy Budget theory: Comment on ;Physics of metabolic organization; by Marko Jusup et al.

    NASA Astrophysics Data System (ADS)

    Geček, Sunčana

    2017-03-01

    Jusup and colleagues in the recent review on physics of metabolic organization [1] discuss in detail motivational considerations and common assumptions of Dynamic Energy Budget (DEB) theory, supply readers with a practical guide to DEB-based modeling, demonstrate the construction and dynamics of the standard DEB model, and illustrate several applications. The authors make a step forward from the existing literature by seamlessly bridging over the dichotomy between (i) thermodynamic foundations of the theory (which are often more accessible and understandable to physicists and mathematicians), and (ii) the resulting bioenergetic models (mostly used by biologists in real-world applications).

  6. A generating function approach to HIV transmission with dynamic contact rates

    DOE PAGES

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    2014-04-24

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease elimination from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed-form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases, both R0 and the probability of an epidemic decrease.

  7. A generating function approach to HIV transmission with dynamic contact rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease elimination from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed-form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases, both R0 and the probability of an epidemic decrease.

  8. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
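
    A direct simulation of the discrete Wright-Fisher model with selection, the model the authors recommend over diffusion approximations when selection is strong (population size, initial frequency, and selection coefficients below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

def wright_fisher(n_pop, p0, s, n_gen, n_reps):
    """Discrete Wright-Fisher trajectories with selection coefficient s."""
    p = np.full(n_reps, p0)
    for _ in range(n_gen):
        p_sel = p * (1 + s) / (1 + p * s)          # deterministic selection
        p = rng.binomial(n_pop, p_sel) / n_pop     # binomial drift step
    return p

n_pop, p0, n_gen, n_reps = 10_000, 0.01, 100, 2000
for s in (0.01, 0.5):    # weak vs strong selection (e.g., drug resistance)
    p_final = wright_fisher(n_pop, p0, s, n_gen, n_reps)
    print(f"s={s:<4}: mean final frequency {p_final.mean():.3f}, "
          f"fixation fraction {(p_final == 1.0).mean():.3f}")
```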

  9. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.

  10. Hypnosis in sport: an Isomorphic Model.

    PubMed

    Robazza, C; Bortoli, L

    1994-10-01

    Hypnosis in sport can be applied according to an Isomorphic Model. Active-alert hypnosis is induced before or during practice whereas traditional hypnosis is induced after practice to establish connections between the two experiences. The fundamental goals are to (a) develop mental skills important to both motor and hypnotic performance, (b) supply a wide range of motor and hypnotic bodily experiences important to performance, and (c) induce alert hypnosis before or during performance. The model is based on the assumption that hypnosis and motor performance share common skills modifiable through training. Similarities between hypnosis and peak performance in the model are also considered. Some predictions are important from theoretical and practical points of view.

  11. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.

  12. Using data mining to predict success in a weight loss trial.

    PubMed

    Batterham, M; Tapsell, L; Charlton, K; O'Shea, J; Thorne, R

    2017-08-01

    Traditional methods for predicting weight loss success use regression approaches, which make the assumption that the relationships between the independent and dependent (or logit of the dependent) variable are linear. The aim of the present study was to investigate whether common demographic and early weight loss variables can predict weight loss success at 12 months without making this assumption. Data mining methods (decision trees, generalised additive models and multivariate adaptive regression splines), in addition to logistic regression, were employed to predict weight loss success (defined as ≥5%) at the end of a 12-month dietary intervention using: (i) demographic variables [body mass index (BMI), sex and age]; (ii) percentage weight loss at 1 month; and (iii) the difference between actual and predicted weight loss using an energy balance model. The methods were compared by assessing model parsimony and the area under the curve (AUC). The decision tree provided the most clinically useful model and had good accuracy (AUC = 0.720; 95% confidence interval = 0.600-0.840). Percentage weight loss at 1 month (≥0.75%) was the strongest predictor of successful weight loss. Within those individuals losing ≥0.75%, individuals with a BMI ≥27 kg/m² were more likely to be successful than those with a BMI between 25 and 27 kg/m². Data mining methods can provide a more accurate way of assessing relationships when conventional assumptions are not met. In the present study, a decision tree provided the most parsimonious model. Given that early weight loss cannot be predicted before randomisation, incorporating this information into a post-randomisation trial design may give better weight loss results. © 2017 The British Dietetic Association Ltd.
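
    A minimal sketch of this modelling strategy using scikit-learn on synthetic stand-in data; the feature set mirrors the variables named above, but the data-generating coefficients, sample size, and tree settings are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1200

# Synthetic stand-in for the trial data: demographics plus early weight loss.
bmi = rng.uniform(25, 40, n)
age = rng.uniform(25, 65, n)
sex = rng.integers(0, 2, n)
loss_1mo = rng.normal(1.0, 0.8, n)            # % weight lost at 1 month

# Success (>=5% at 12 months) driven mainly by early loss, slightly by BMI.
logit = -2.0 + 1.8 * loss_1mo + 0.05 * (bmi - 27)
success = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([bmi, sex, age, loss_1mo])
X_tr, X_te, y_tr, y_te = train_test_split(X, success, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"decision-tree AUC on held-out data: {auc:.3f}")
```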

  13. Toward Identifying Needed Investments in Modeling and Simulation Tools for NEO Deflection Planning

    NASA Technical Reports Server (NTRS)

    Adams, Robert B.

    2009-01-01

    It's time: a) To bring planetary scientists, deflection system investigators and vehicle designers together on the characterization/mitigation problem. b) To develop a comprehensive trade space of options. c) To trade options under a common set of assumptions and see what comparisons on effectiveness can be made. d) To explore the synergy that can be had with proposed scientific and exploration architectures while interest in NEOs is at an all-time high.

  14. Mathematical Modeling: Are Prior Experiences Important?

    ERIC Educational Resources Information Center

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  15. Assumptions to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  16. A Multi-Sector Assessment of the Effects of Climate Change at the Energy-Water-Land Nexus in the US

    NASA Astrophysics Data System (ADS)

    McFarland, J.; Sarofim, M. C.; Martinich, J.

    2017-12-01

    Rising temperatures and changing precipitation patterns due to climate change are projected to alter many sectors of the US economy. A growing body of research has examined these effects in the energy, water, and agricultural sectors. Rising summer temperatures increase the demand for electricity. Changing precipitation patterns affect the availability of water for hydropower generation, thermo-electric cooling, irrigation, and municipal and industrial consumption. A combination of changes to temperature and precipitation alters crop yields and cost-effective farming practices. Although a significant body of research exists on analyzing impacts to individual sectors, fewer studies examine the effects using a common set of assumptions (e.g., climatic and socio-economic) within a coupled modeling framework. The present analysis uses a multi-sector, multi-model framework with common input assumptions to assess the projected effects of climate change on energy, water, and land-use in the United States. The analysis assesses the climate impacts across 5 global circulation models (GCMs) for representative concentration pathways (RCPs) of 8.5 and 4.5 W/m2. The energy sector models - Pacific Northwest National Lab's Global Change Assessment Model (GCAM) and the National Renewable Energy Laboratory's Regional Energy Deployment System (ReEDS) - show the effects of rising temperature on energy and electricity demand. Electricity supply in ReEDS is also affected by the availability of water for hydropower and thermo-electric cooling. Water availability is calculated from the GCMs' precipitation using the US Basins model. The effects on agriculture are estimated using both a process-based crop model (EPIC) and an agricultural economic model (FASOM-GHG), which adjusts water supply curves based on information from US Basins. The sectoral models show higher economic costs of climate change under RCP 8.5 than RCP 4.5, averaged across the country and across GCMs.

  17. A Marginal Cost Based "Social Cost of Carbon" Provides Inappropriate Guidance in a World That Needs Rapid and Deep Decarbonization

    NASA Astrophysics Data System (ADS)

    Morgan, M. G.; Vaishnav, P.; Azevedo, I. L.; Dowlatabadi, H.

    2016-12-01

  18. A sensitivity study of the effects of evaporation/condensation accommodation coefficients on transient heat pipe modeling

    NASA Astrophysics Data System (ADS)

    Hall, Michael L.; Doster, J. Michael

    1990-03-01

    The dynamic behavior of liquid metal heat pipe models is strongly influenced by the choice of evaporation and condensation modeling techniques. Classic kinetic theory descriptions of the evaporation and condensation processes are often inadequate for real situations; empirical accommodation coefficients are commonly utilized to reflect nonideal mass transfer rates. The complex geometries and flow fields found in proposed heat pipe systems cause considerable deviation from the classical models. The THROHPUT code, which has been described in previous works, was developed to model transient liquid metal heat pipe behavior from frozen startup conditions to steady state full power operation. It is used here to evaluate the sensitivity of transient liquid metal heat pipe models to the choice of evaporation and condensation accommodation coefficients. Comparisons are made with experimental liquid metal heat pipe data. It is found that heat pipe behavior can be predicted with the proper choice of the accommodation coefficients. However, the common assumption of spatially constant accommodation coefficients is found to be a limiting factor in the model.

  19. Assumptions made when preparing drug exposure data for analysis have an impact on results: An unreported step in pharmacoepidemiology studies.

    PubMed

    Pye, Stephen R; Sheppard, Thérèse; Joseph, Rebecca M; Lunt, Mark; Girard, Nadyne; Haas, Jennifer S; Bates, David W; Buckeridge, David L; van Staa, Tjeerd P; Tamblyn, Robyn; Dixon, William G

    2018-04-17

    Real-world data for observational research commonly require formatting and cleaning prior to analysis. Data preparation steps are rarely reported adequately and are likely to vary between research groups. Variation in methodology could potentially affect study outcomes. This study aimed to develop a framework to define and document drug data preparation and to examine the impact of different assumptions on results. An algorithm for processing prescription data was developed and tested using data from the Clinical Practice Research Datalink (CPRD). The impact of varying assumptions was examined by estimating the association between 2 exemplar medications (oral hypoglycaemic drugs and glucocorticoids) and cardiovascular events after preparing multiple datasets derived from the same source prescription data. Each dataset was analysed using Cox proportional hazards modelling. The algorithm included 10 decision nodes and 54 possible unique assumptions. Over 11 000 possible pathways through the algorithm were identified. In both exemplar studies, similar hazard ratios and standard errors were found for the majority of pathways; however, certain assumptions had a greater influence on results. For example, in the hypoglycaemic analysis, choosing a different variable to define prescription end date altered the hazard ratios (95% confidence intervals) from 1.77 (1.56-2.00) to 2.83 (1.59-5.04). The framework offers a transparent and efficient way to perform and report drug data preparation steps. Assumptions made during data preparation can impact the results of analyses. Improving transparency regarding drug data preparation would increase the repeatability, reproducibility, and comparability of published results. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.

  20. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
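
    A compact simulation of the core mechanism, assuming invented parameter values: a noisy accumulator ramps to a fixed threshold, with the drift set by the timed interval and the noise variance scaling with the drift (as for balanced Poisson-like spiking). That scaling is what yields the scale-invariant, constant-CV response times the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(8)

def timed_responses(interval, n_trials, k=50.0, c=2.0, dt=0.001):
    """Accumulate to threshold k with drift k/interval; noise variance
    proportional to the drift gives a constant coefficient of variation."""
    drift = k / interval
    sigma = np.sqrt(c * drift)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < k:                     # noisy ramp to the response threshold
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        rts[i] = t
    return rts

for interval in (1.0, 2.0):
    rts = timed_responses(interval, 300)
    print(f"{interval}s interval: mean RT {rts.mean():.2f}s, "
          f"CV {rts.std(ddof=1) / rts.mean():.3f}")   # CV roughly constant
```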

  1. Global Well-Posedness and Decay Rates of Strong Solutions to a Non-Conservative Compressible Two-Fluid Model

    NASA Astrophysics Data System (ADS)

    Evje, Steinar; Wang, Wenjun; Wen, Huanyao

    2016-09-01

In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P⁺ ≠ P⁻. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) for the compressible two-fluid model, where a common pressure P⁺ = P⁻ is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their setting. Besides, their analysis relies on a special choice for the density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P⁺ − P⁻ = f ≠ 0, where the difference function f is assumed to be strictly decreasing near the equilibrium relative to the fluid corresponding to P⁻. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question.

  2. Technical and biological variance structure in mRNA-Seq data: life in the real world

    PubMed Central

    2012-01-01

Background mRNA expression data from next-generation sequencing platforms are obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance is equal to the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is also commonly used to model count data. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions These conclusions will guide the development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
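
    The quadratic mean-variance relationship reported here is easy to reproduce; a short sketch with an assumed dispersion parameter:

    ```python
    # Negative Binomial var = mu + mu^2/size vs. the Poisson constraint
    # var = mu. The dispersion parameter `size` is assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    size = 10.0                                   # NB dispersion (assumed)
    for mu in (1.0, 10.0, 100.0):
        p = size / (size + mu)                    # NumPy's (n, p) parameterisation
        nb = rng.negative_binomial(size, p, 100_000)
        pois = rng.poisson(mu, 100_000)
        print(f"mu={mu:6.1f}  NB var={nb.var():9.1f}"
              f" (theory {mu + mu**2 / size:9.1f})  Poisson var={pois.var():7.1f}")
    ```

    At low means the two distributions are close; at high means the quadratic term dominates, which is why biological replicates push the data away from the Poisson assumption.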

  3. Extracurricular Business Planning Competitions: Challenging the Assumptions

    ERIC Educational Resources Information Center

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  4. Introduction to the Application of Web-Based Surveys.

    ERIC Educational Resources Information Center

    Timmerman, Annemarie

    This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…

  5. Diagnostic tools for nearest neighbors techniques when used with satellite imagery

    Treesearch

    Ronald E. McRoberts

    2009-01-01

    Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....

  6. Who Takes College Algebra?

    ERIC Educational Resources Information Center

    Herriott, Scott R.; Dunbar, Steven R.

    2009-01-01

    The common understanding within the mathematics community is that the role of the college algebra course is to prepare students for calculus. Though exceptions are emerging, the curriculum of most college algebra courses and the content of most textbooks on the market both reflect that assumption. This article calls that assumption into question…

  7. Preparing Democratic Education Leaders

    ERIC Educational Resources Information Center

    Young, Michelle D.

    2010-01-01

Although it is common to hear people espouse the importance of education to ensuring a strong and vibrant democracy, the assumptions underlying such statements are rarely unpacked. Two of the most widespread, though not necessarily complementary, assumptions include: (1) to truly participate in a democracy, citizens must be well educated; and (2)…

  8. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells

    PubMed Central

    Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam

    2013-01-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
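
    A heavily simplified sketch of the common-noise idea (invented parameters; no stimulus or spike-history filters, unlike the fitted model in the paper): two conditionally Poisson cells sharing a slow Gaussian input synchronize without any direct coupling term:

    ```python
    # Two cells driven by a shared slow Gaussian noise source z(t).
    # Coincidence counts exceed the independent prediction even though
    # neither cell's rate depends on the other's spikes.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n = 0.001, 60_000                        # 1 ms bins, 60 s of activity
    # shared slow noise with roughly unit variance (30-bin smoothing kernel)
    z = np.convolve(rng.normal(size=n), np.ones(30) / np.sqrt(30), mode="same")

    base, w = np.log(20.0), 0.7                  # 20 Hz baseline; noise weight
    spikes = [rng.random(n) < np.exp(base + w * z) * dt for _ in range(2)]
    s1, s2 = spikes

    obs = np.mean(s1 & s2)                       # observed coincidence rate
    chance = s1.mean() * s2.mean()               # prediction under independence
    print(f"coincidences: {obs:.2e} observed vs {chance:.2e} under independence")
    ```

    The excess coincidences come entirely from the shared input, which is the qualitative point of the common-noise model.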

  9. Development of state and transition model assumptions used in National Forest Plan revision

    Treesearch

    Eric B. Henderson

    2008-01-01

    State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...

  10. Assumptions to the annual energy outlook 1999 : with projections to 2020

    DOT National Transportation Integrated Search

    1998-12-16

This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 1999 (AEO99), including general features of the model structure, assumptions concerning energy ...

  11. Assumptions to the annual energy outlook 2000 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-01-01

This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2000 (AEO2000), including general features of the model structure, assumptions concerning energ...

  12. Assumptions to the annual energy outlook 2001 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-12-01

This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2001 (AEO2001), including general features of the model structure, assumptions concerning ener...

  13. Assumptions for the annual energy outlook 2003 : with projections to 2025

    DOT National Transportation Integrated Search

    2003-01-01

This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2003 (AEO2003), including general features of the model structure, assumptions concerning ener...

  14. Potential implications of the bystander effect on TCP and EUD when considering target volume dose heterogeneity.

    PubMed

    Balderson, Michael J; Kirkby, Charles

    2015-01-01

In light of in vitro evidence suggesting that radiation-induced bystander effects may enhance non-local cell killing, there is potential for impact on radiotherapy treatment planning paradigms such as the goal of delivering a uniform dose throughout the clinical target volume (CTV). This work applies a bystander effect model to calculate equivalent uniform dose (EUD) and tumor control probability (TCP) for external beam prostate treatment and compares the results with a more common model where local response is dictated exclusively by local absorbed dose. The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. EUD and TCP of a prostate cancer target volume under conditions of increasing dose heterogeneity were calculated using two models: one incorporating bystander effects derived from previously published in vitro bystander data (McMahon et al. 2012, 2013a), and one using a common linear-quadratic (LQ) response that relies exclusively on local absorbed dose. Dose through the CTV was modelled as a normal distribution, where the degree of heterogeneity was dictated by changing the standard deviation (SD). Also, a representative clinical dose distribution was examined as cold (low-dose) sub-volumes were systematically introduced. The bystander model suggests a moderate degree of dose heterogeneity throughout a target volume will yield as good or better an outcome than a uniform dose in terms of EUD and TCP. For a typical intermediate-risk prostate prescription of 78 Gy over 39 fractions, maxima in EUD and TCP as a function of increasing SD occurred at SD ∼ 5 Gy. The plots only dropped below the uniform-dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. Small, but potentially significant, differences in the outcome metrics between the models were identified in the clinically derived dose distribution as cold sub-volumes were introduced. In terms of EUD and TCP, the bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV varies. The results suggest, at least in a limiting sense, the potential for allowing some degree of dose heterogeneity within a CTV, although further investigation of the assumptions of the bystander model is warranted.
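
    For orientation, the conventional comparison arm (the local LQ model, not the bystander model) can be sketched as follows; the radiobiological parameters are illustrative assumptions, not the paper's values:

    ```python
    # Poisson TCP and generalized EUD for a CTV whose voxel doses are drawn
    # from a normal distribution around a 78 Gy / 39 fraction prescription.
    # alpha, beta, clonogen number, and the EUD parameter are all assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta = 0.15, 0.05          # Gy^-1, Gy^-2 (assumed; alpha/beta = 3 Gy)
    n_frac = 39
    a = -10                           # EUD volume-effect parameter (assumed)

    def eud_tcp(mean_dose=78.0, sd=0.0, voxels=10_000, clonogens=1e7):
        D = rng.normal(mean_dose, sd, voxels)            # voxel doses
        d = D / n_frac                                   # dose per fraction
        sf = np.exp(-(alpha + beta * d) * D)             # LQ surviving fraction
        tcp = np.exp(-(clonogens / voxels) * sf).prod()  # Poisson TCP
        eud = np.mean(D ** a) ** (1 / a)                 # generalized EUD
        return eud, tcp

    for sd in (0.0, 5.0, 10.0):
        eud, tcp = eud_tcp(sd=sd)
        print(f"SD = {sd:4.1f} Gy: EUD = {eud:6.2f} Gy, TCP = {tcp:.3f}")
    ```

    In this local LQ arm heterogeneity can only reduce TCP, because cold voxels dominate the survival product; that monotone penalty is exactly the baseline against which the bystander model's tolerance of heterogeneity is being compared.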

  15. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    PubMed

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.

  16. Equilibrium of Global Amphibian Species Distributions with Climate

    PubMed Central

    Munguía, Mariana; Rahbek, Carsten; Rangel, Thiago F.; Diniz-Filho, Jose Alexandre F.; Araújo, Miguel B.

    2012-01-01

A common assumption in bioclimatic envelope modeling is that species distributions are in equilibrium with contemporary climate. A number of studies have measured departures from equilibrium in species distributions in particular regions, but such investigations were never carried out for a complete lineage across its entire distribution. We measure departures of equilibrium with contemporary climate for the distributions of the world amphibian species. Specifically, we fitted bioclimatic envelopes for 5544 species using three presence-only models. We then measured the proportion of the modeled envelope that is currently occupied by the species, as a metric of equilibrium of species distributions with climate. The assumption was that the greater the difference between modeled bioclimatic envelope and the occupied distribution, the greater the likelihood that species distribution would not be at equilibrium with contemporary climate. On average, amphibians occupied 30% to 57% of their potential distributions. Although patterns differed across regions, there were no significant differences among lineages. Species in the Neotropics, Afrotropics, Indo-Malay, and Palaearctic occupied a smaller proportion of their potential distributions than species in the Nearctic, Madagascar, and Australasia. We acknowledge that our models underestimate non-equilibrium, and discuss potential reasons for the observed patterns. From a modeling perspective our results support the view that at global scale bioclimatic envelope models might perform similarly across lineages but differently across regions. PMID:22511938

  17. EasyDelta: A spreadsheet for kinetic modeling of the stable carbon isotope composition of natural gases

    NASA Astrophysics Data System (ADS)

    Zou, Yan-Rong; Wang, Lianyuan; Shuai, Yanhua; Peng, Ping'an

    2005-08-01

A new kinetic model and an Excel© spreadsheet program for modeling the stable carbon isotope composition of natural gases are provided in this paper. The model and spreadsheet can be used to describe and predict variations in the stable carbon isotopes of natural gases under both experimental and geological conditions, as a function of heating temperature or geological time. It is a user-friendly, convenient tool for modeling isotope variation with time under experimental and geological conditions. The spreadsheet, based on experimental data, requires the input of the kinetic parameters of gaseous hydrocarbon generation. Some assumptions are made in this model: the conventional (non-isotope-specific) kinetic parameters represent the light isotope species; the initial isotopic value is the same for all parallel chemical reactions of gaseous hydrocarbon generation, for simplicity; the pre-exponential factor ratio, 13A/12A, is a constant; and both heavy and light isotope species have similar activation energy distributions. These assumptions are common in the modeling of isotope ratios. The spreadsheet is used to search for the best kinetic parameters of the heavy isotope species, minimizing errors relative to the experimental data, and then to extrapolate isotopic changes to the thermal history of sedimentary basins. A short calculation example on the variation in δ13C values of methane is provided to show application to geological conditions.
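
    A minimal re-implementation of the kinetic idea (all constants invented, not the calibrated spreadsheet values): parallel first-order Arrhenius reactions for the light and heavy species, with a fixed 13A/12A ratio, give the δ13C of the cumulative gas:

    ```python
    # Parallel first-order cracking reactions for 12C and 13C species.
    # Activation energies, pre-exponential factors, reaction potentials,
    # and the source delta13C are all assumed for illustration.
    import numpy as np

    R = 8.314                                 # J/(mol K)
    VPDB = 0.0112372                          # 13C/12C of the VPDB standard
    E12 = np.array([200e3, 210e3, 220e3])     # J/mol, light species (assumed)
    E13 = E12 + 60.0                          # small shift for heavy species (assumed)
    x0 = np.array([0.3, 0.5, 0.2])            # fractional reaction potentials (assumed)
    A12 = 1e14                                # pre-exponential factor, 1/s (assumed)
    A13 = 1.00005 * A12                       # fixed 13A/12A ratio (model assumption)
    delta0 = -35.0                            # initial delta13C of the source (assumed)

    def converted(T, t, A, E):
        """Cumulative fraction generated by the parallel reactions."""
        k = A * np.exp(-E / (R * T))
        return np.sum(x0 * (1.0 - np.exp(-k * t)))

    t = 72 * 3600.0                           # 72 h pyrolysis run (assumed)
    r0 = (delta0 / 1000.0 + 1.0) * VPDB
    for T in (520.0, 560.0, 600.0):           # heating temperatures, K
        F12, F13 = converted(T, t, A12, E12), converted(T, t, A13, E13)
        delta = ((r0 * F13 / F12) / VPDB - 1.0) * 1000.0
        print(f"T = {T:.0f} K: fraction converted {F12:.3f}, delta13C = {delta:.1f} permil")
    ```

    Early generated gas is isotopically light and converges toward the source value as conversion completes, the qualitative behaviour such kinetic models extrapolate to basin thermal histories.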

  18. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents... would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model's primary objective... Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model's assumptions, major components and modes of operation

  19. Modeling Bivariate Change in Individual Differences: Prospective Associations Between Personality and Life Satisfaction.

    PubMed

    Hounkpatin, Hilda Osafo; Boyce, Christopher J; Dunn, Graham; Wood, Alex M

    2017-09-18

    A number of structural equation models have been developed to examine change in 1 variable or the longitudinal association between 2 variables. The most common of these are the latent growth model, the autoregressive cross-lagged model, the autoregressive latent trajectory model, and the latent change score model. The authors first overview each of these models through evaluating their different assumptions surrounding the nature of change and how these assumptions may result in different data interpretations. They then, to elucidate these issues in an empirical example, examine the longitudinal association between personality traits and life satisfaction. In a representative Dutch sample (N = 8,320), with participants providing data on both personality and life satisfaction measures every 2 years over an 8-year period, the authors reproduce findings from previous research. However, some of the structural equation models overviewed have not previously been applied to the personality-life satisfaction relation. The extended empirical examination suggests intraindividual changes in life satisfaction predict subsequent intraindividual changes in personality traits. The availability of data sets with 3 or more assessment waves allows the application of more advanced structural equation models such as the autoregressive latent trajectory or the extended latent change score model, which accounts for the complex dynamic nature of change processes and allows stronger inferences on the nature of the association between variables. However, the choice of model should be determined by theories of change processes in the variables being studied. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. cit: hypothesis testing software for mediation analysis in genomic applications.

    PubMed

    Millstein, Joshua; Chen, Gary K; Breton, Carrie V

    2016-08-01

The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in data/models accommodated by the software, and (iii) low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables, and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). Contact: joshua.millstein@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    ERIC Educational Resources Information Center

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
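
    Several of the procedures typically compared in such studies are available in SciPy; a quick sketch on simulated groups with unequal variances (group sizes and scales assumed):

    ```python
    # Bartlett's test is sensitive to non-normality; Levene's test with
    # center="median" is the Brown-Forsythe variant often recommended when
    # normality is doubtful; Fligner-Killeen is a rank-based alternative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    groups = [rng.normal(0, s, 30) for s in (1.0, 1.0, 2.0)]  # unequal variances

    print("Bartlett:       ", stats.bartlett(*groups))
    print("Levene (mean):  ", stats.levene(*groups, center="mean"))
    print("Brown-Forsythe: ", stats.levene(*groups, center="median"))
    print("Fligner-Killeen:", stats.fligner(*groups))
    ```

    Running such tests under skewed or heavy-tailed group distributions is essentially the simulation design the abstract describes, with Type I error as the criterion.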

  2. Single-Molecule Test for Markovianity of the Dynamics along a Reaction Coordinate.

    PubMed

    Berezhkovskii, Alexander M; Makarov, Dmitrii E

    2018-05-03

In an effort to answer the much-debated question of whether the time evolution of common experimental observables can be described as one-dimensional diffusion in the potential of mean force, we propose a simple criterion that allows one to test whether the Markov assumption is applicable to a single-molecule trajectory x(t). This test does not involve fitting of the data to any presupposed model and can be applied to experimental data with relatively low temporal resolution.
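
    One generic way to operationalize such a check (an assumption on my part; the paper proposes its own criterion) is a Chapman-Kolmogorov comparison on a discretized trajectory: if the dynamics are Markovian, the transition matrix at lag 2τ should match the square of the matrix at lag τ.

    ```python
    # Chapman-Kolmogorov style Markovianity check on a binned trajectory.
    # The example trajectory is a discretized Ornstein-Uhlenbeck process,
    # which is Markovian, so the mismatch should be small.
    import numpy as np

    def transition_matrix(states, lag, n_bins):
        T = np.zeros((n_bins, n_bins))
        np.add.at(T, (states[:-lag], states[lag:]), 1.0)   # count transitions
        return T / np.maximum(T.sum(axis=1, keepdims=True), 1.0)

    rng = np.random.default_rng(5)
    x = np.zeros(100_000)
    for i in range(1, x.size):
        x[i] = 0.99 * x[i - 1] + 0.1 * rng.normal()
    states = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))  # 4 bins

    tau, n_bins = 50, 4
    T1 = transition_matrix(states, tau, n_bins)
    T2 = transition_matrix(states, 2 * tau, n_bins)
    print("max |T(tau)^2 - T(2 tau)| =", np.abs(T1 @ T1 - T2).max())
    ```

    A persistent mismatch across lags would indicate memory along the coordinate, i.e., a failure of the one-dimensional Markov description.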

  3. 2024 Unmanned Undersea Warfare Concept

    DTIC Science & Technology

    2013-06-01

mine. Assumptions are that the high-tech mine would have a 400-meter range that spans 360 degrees, a 90% probability of detecting a HVU, and a 30... motor volume - The electric propulsion motor is assumed to be 0.127 cubic meters. A common figure of 24" x 18" x 18" is assumed. This size will allow... regard to propagation loss is assumed to be 400 Hz. Using Excel spreadsheet modeling, the maximum range is determined by finding that range resulting in

  4. Causal mediation analysis with a latent mediator.

    PubMed

    Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra

    2016-05-01

    Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Critical appraisal of assumptions in chains of model calculations used to project local climate impacts for adaptation decision support—the case of Baakse Beek

    NASA Astrophysics Data System (ADS)

    van der Sluijs, Jeroen P.; Arjan Wardekker, J.

    2015-04-01

    In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of respectively subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight in model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well informed recommendations for model improvement.

  6. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions to which the time series data must conform if the method is to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
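
    A minimal sketch of the prediction step (window size, histories, and names invented; the paper evaluates several forecasting methods, of which a linear trend is the simplest):

    ```python
    # Fit a linear trend to each parameter value's recent performance history
    # and project one iteration ahead; projections feed selection probabilities.
    import numpy as np

    def forecast_next(history, window=10):
        """Predict next performance for each option from its recent history."""
        preds = []
        for series in history:                      # one series per option
            y = np.asarray(series[-window:], dtype=float)
            t = np.arange(y.size)
            slope, intercept = np.polyfit(t, y, 1)  # least-squares line
            preds.append(intercept + slope * y.size)
        return np.array(preds)

    history = [[0.20, 0.25, 0.30, 0.40],            # e.g., two mutation rates
               [0.50, 0.45, 0.40, 0.35]]
    pred = forecast_next(history)
    probs = np.maximum(pred, 1e-9)
    probs /= probs.sum()                            # selection probabilities
    print("predicted:", pred, "selection probs:", probs)
    ```

    The linearity-of-trend assumption is exactly the kind of conformance requirement the abstract says must hold for prediction to help.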

  7. The Impact of Multiple Endpoint Dependency on "Q" and "I"[superscript 2] in Meta-Analysis

    ERIC Educational Resources Information Center

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-01-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures "Q" and…
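
    For reference, the two statistics at issue are straightforward to compute; a worked sketch with invented effect sizes and variances:

    ```python
    # Cochran's Q = sum w_i (y_i - y_bar)^2 with w_i = 1/v_i, and
    # I^2 = max(0, (Q - df) / Q), the proportion of variation beyond chance.
    import numpy as np

    y = np.array([0.30, 0.45, 0.20, 0.60])    # effect sizes (assumed)
    v = np.array([0.02, 0.03, 0.025, 0.04])   # sampling variances (assumed)

    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_bar) ** 2)
    df = y.size - 1
    I2 = max(0.0, (Q - df) / Q)
    print(f"Q = {Q:.2f} on {df} df, I^2 = {100 * I2:.1f}%")
    ```

    Both formulas assume independent effect sizes; correlated endpoints from the same study violate that assumption, which is precisely the distortion the study quantifies.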

  8. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  9. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model. PMID:29518121

  10. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model.

  11. Molding of Plasmonic Resonances in Metallic Nanostructures: Dependence of the Non-Linear Electric Permittivity on System Size and Temperature

    PubMed Central

    Alabastri, Alessandro; Tuccio, Salvatore; Giugni, Andrea; Toma, Andrea; Liberale, Carlo; Das, Gobind; De Angelis, Francesco; Di Fabrizio, Enzo; Zaccaria, Remo Proietti

    2013-01-01

    In this paper, we review the principal theoretical models through which the dielectric function of metals can be described. Starting from the Drude assumptions for intraband transitions, we show how this model can be improved by including interband absorption and temperature effect in the damping coefficients. Electronic scattering processes are described and included in the dielectric function, showing their role in determining plasmon lifetime at resonance. Relationships among permittivity, electric conductivity and refractive index are examined. Finally, a temperature dependent permittivity model is presented and is employed to predict temperature and non-linear field intensity dependence on commonly used plasmonic geometries, such as nanospheres. PMID:28788366
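
    The intraband starting point of the review is the Drude dielectric function; a minimal sketch with gold-like parameter values (assumed; the interband and temperature corrections discussed in the paper are omitted):

    ```python
    # Drude model: eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).
    # Parameter values below are rough, gold-like assumptions.
    import numpy as np

    eps_inf = 9.0                 # background permittivity (assumed)
    wp = 1.37e16                  # plasma frequency, rad/s (approx. for gold)
    gamma = 1.0e14                # damping rate, rad/s (assumed)

    def drude(omega):
        return eps_inf - wp**2 / (omega**2 + 1j * gamma * omega)

    for wavelength_nm in (500, 633, 800):
        omega = 2 * np.pi * 3e8 / (wavelength_nm * 1e-9)
        print(f"{wavelength_nm} nm: eps = {drude(omega):.2f}")
    ```

    The temperature-dependent models reviewed in the paper effectively make gamma (and, through electron scattering, the interband terms) functions of temperature and field intensity, shifting the resonance conditions this simple form predicts.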

  12. Will Organic Synthesis Within Icy Grains or on Dust Surfaces in the Primitive Solar Nebula Completely Erase the Effects of Photochemical Self Shielding?

    NASA Technical Reports Server (NTRS)

    Nuth, Joseph A., III; Johnson, Natasha M.

    2012-01-01

There are at least 3 separate photochemical self-shielding models with different degrees of commonality. All of these models rely on the selective absorption of ¹²C¹⁶O dissociative photons as the radiation source penetrates through the gas, allowing the production of reactive ¹⁷O and ¹⁸O atoms within a specific volume. Each model also assumes that the undissociated C¹⁶O is stable and does not participate in the chemistry of nebular dust grains. In what follows we will argue that this last, very important assumption is simply not true despite the very high energy of the CO molecular bond.

  13. Re-Thinking the Use of the OML Model in Electric-Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

The Orbit Motion Limited (OML) model commonly forms the basis for calculations made to determine the effect of the long, biased wires of an Electric Sail on solar wind protons and electrons (which determines the thrust generated and the required operating power). A new analysis of the results of previously conducted ground-based experimental studies of spacecraft-space plasma interactions indicates that the expected thrust created by deflected solar wind protons and the current of collected solar wind electrons could be considerably higher than the OML model would suggest. Herein the experimental analysis will be summarized, and the assumptions and approximations required to derive the OML equation, together with the limitations they impose, will be considered.

  14. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.

  15. Flexible modeling improves assessment of prognostic value of C-reactive protein in advanced non-small cell lung cancer.

    PubMed

    Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D

    2010-03-30

C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of the Cox PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In the Cox PH model, high CRP increased the risk of death (HR=1.11 per doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that the Cox PH model underestimates early risks associated with high CRP.
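
    A minimal sketch of how the PH assumption can be checked in practice, using the lifelines Python library (the data frame below is invented toy data, not the study cohort):

    ```python
    # Fit a Cox model, then test proportional hazards via scaled Schoenfeld
    # residuals. Column names and values are placeholders.
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    df = pd.DataFrame({
        "time":     [5, 8, 12, 14, 20, 24, 27, 30, 33, 36, 40, 44],
        "event":    [1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1],
        "log2_crp": [4.1, 3.2, 2.8, 4.9, 5.0, 4.6, 2.1, 5.5, 3.0, 4.2, 2.5, 3.8],
    })

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    cph.print_summary()
    result = proportional_hazard_test(cph, df, time_transform="rank")
    result.print_summary()
    ```

    A low p-value in the test, as the study found for CRP, is the signal that the hazard ratio is not constant over follow-up and that a flexible, time-varying model is worth fitting.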

  16. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option and endovascular coiling has introduced itself as the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.

  17. Greening the Grid: Advances in Production Cost Modeling for India Renewable Energy Grid Integration Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin; Palchak, David

The Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid study uses advanced weather and power system modeling to explore the operational impacts of meeting India's 2022 renewable energy targets and identify actions that may be favorable for integrating high levels of renewable energy into the Indian grid. The study relies primarily on a production cost model that simulates optimal scheduling and dispatch of available generation in a future year (2022) by minimizing total production costs subject to physical, operational, and market constraints. This fact sheet provides a detailed look at each of these models, including their common assumptions and the insights provided by each.

  18. Models of stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2005-06-01

    Gene expression is an inherently stochastic process: Genes are activated and inactivated by random association and dissociation events, transcription is typically rare, and many proteins are present in low numbers per cell. The last few years have seen an explosion in the stochastic modeling of these processes, predicting protein fluctuations in terms of the frequencies of the probabilistic events. Here I discuss commonalities between theoretical descriptions, focusing on a gene-mRNA-protein model that includes most published studies as special cases. I also show how expression bursts can be explained as simplistic time-averaging, and how generic approximations can allow for concrete interpretations without requiring concrete assumptions. Measures and nomenclature are discussed to some extent and the modeling literature is briefly reviewed.
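
    The gene-mRNA-protein cascade described here is commonly simulated with Gillespie's stochastic simulation algorithm; a sketch with assumed rate constants reproduces the bursty protein fluctuations qualitatively:

    ```python
    # Gillespie simulation of a two-state gene with transcription,
    # translation, and first-order decay. All rate constants are assumed.
    import numpy as np

    rng = np.random.default_rng(6)
    k_on, k_off = 0.05, 0.15          # gene activation / inactivation (1/s)
    k_m, g_m = 2.0, 0.02              # transcription and mRNA decay (1/s)
    k_p, g_p = 0.05, 0.005            # translation and protein decay (1/s)

    gene, m, p, t = 0, 0, 0, 0.0
    protein_trace = []
    while t < 20_000.0:
        rates = np.array([
            k_on * (1 - gene), k_off * gene,   # gene toggling
            k_m * gene, g_m * m,               # mRNA birth / death
            k_p * m, g_p * p,                  # protein birth / death
        ])
        t += rng.exponential(1.0 / rates.sum())
        r = rng.choice(6, p=rates / rates.sum())
        gene += (r == 0) - (r == 1)
        m += (r == 2) - (r == 3)
        p += (r == 4) - (r == 5)
        protein_trace.append(p)

    p_arr = np.array(protein_trace)
    print(f"mean protein {p_arr.mean():.0f}, "
          f"Fano factor {p_arr.var() / p_arr.mean():.1f}")
    ```

    A Fano factor well above 1 is the signature of expression bursts: rare gene activations each release a pulse of mRNA and protein, the time-averaging picture the abstract discusses.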

  19. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  20. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694

  1. Correlation not Causation: The Relationship between Personality Traits and Political Ideologies

    PubMed Central

    Verhulst, Brad; Eaves, Lindon J.; Hatemi, Peter K.

    2013-01-01

    The assumption in the personality and politics literature is that a person's personality motivates them to develop certain political attitudes later in life. This assumption is founded on the simple correlation between the two constructs and the observation that personality traits are genetically influenced and develop in infancy, whereas political preferences develop later in life. Work in psychology, behavioral genetics, and recently political science, however, has demonstrated that political preferences also develop in childhood and are equally influenced by genetic factors. These findings cast doubt on the assumed causal relationship between personality and politics. Here we test the causal relationship between personality traits and political attitudes using a direction of causation structural model on a genetically informative sample. The results suggest that personality traits do not cause people to develop political attitudes; rather, the correlation between the two is a function of an innate common underlying genetic factor. PMID:22400142

  2. Discerning strain effects in microbial dose-response data.

    PubMed

    Coleman, Margaret E; Marks, Harry M; Golden, Neal J; Latimer, Heejeong K

    In order to estimate the risk or probability of adverse events in risk assessment, it is necessary to identify the important variables that contribute to the risk and provide descriptions of distributions of these variables for well-defined populations. One component of modeling dose response that can create uncertainty is the inherent genetic variability among pathogenic bacteria. For many microbial risk assessments, the "default" assumption used for dose response does not account for strain or serotype variability in pathogenicity and virulence, other than perhaps, recognizing the existence of avirulent strains. However, an examination of data sets from human clinical trials in which Salmonella spp. and Campylobacter jejuni strains were administered reveals significant strain differences. This article discusses the evidence for strain variability and concludes that more biologically based alternatives are necessary to replace the default assumptions commonly used in microbial risk assessment, specifically regarding strain variability.
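
    As a concrete illustration of the alternative being argued for, the sketch below (parameter values invented) uses the widely applied approximate beta-Poisson dose-response model and gives each strain its own (alpha, beta) pair rather than a single default curve:

    ```python
    # Approximate beta-Poisson dose-response: P = 1 - (1 + dose/beta)^(-alpha).
    # The strain-specific parameter pairs are invented for illustration.
    import numpy as np

    def beta_poisson(dose, alpha, beta):
        return 1.0 - (1.0 + dose / beta) ** -alpha

    doses = np.array([10.0, 1e3, 1e5])
    strains = {"strain A": (0.2, 40.0), "strain B": (0.4, 4000.0)}
    for name, (a, b) in strains.items():
        print(name, np.round(beta_poisson(doses, a, b), 3))
    ```

    At the same dose the two hypothetical strains differ severalfold in infection probability, which is the kind of variability a single default dose-response curve would average away.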

  3. Correlation not causation: the relationship between personality traits and political ideologies.

    PubMed

    Verhulst, Brad; Eaves, Lindon J; Hatemi, Peter K

    2012-01-01

    The assumption in the personality and politics literature is that a person's personality motivates them to develop certain political attitudes later in life. This assumption is founded on the simple correlation between the two constructs and the observation that personality traits are genetically influenced and develop in infancy, whereas political preferences develop later in life. Work in psychology, behavioral genetics, and recently political science, however, has demonstrated that political preferences also develop in childhood and are equally influenced by genetic factors. These findings cast doubt on the assumed causal relationship between personality and politics. Here we test the causal relationship between personality traits and political attitudes using a direction of causation structural model on a genetically informative sample. The results suggest that personality traits do not cause people to develop political attitudes; rather, the correlation between the two is a function of an innate common underlying genetic factor.

  4. Comprehensive model for predicting elemental composition of coal pyrolysis products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.

Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles released during pyrolysis have the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.

  5. Near Real-Time Optimal Prediction of Adverse Events in Aviation Data

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander; Das, Santanu

    2010-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.

  6. Using "Excel" for White's Test--An Important Technique for Evaluating the Equality of Variance Assumption and Model Specification in a Regression Analysis

    ERIC Educational Resources Information Center

    Berenson, Mark L.

    2013-01-01

    There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
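
    The same test is available outside the spreadsheet as a cross-check; the sketch below (toy data with assumed coefficients) runs White's test with statsmodels, which regresses the squared OLS residuals on the regressors, their squares, and cross-products:

    ```python
    # White's test for heteroscedasticity on a deliberately heteroscedastic
    # toy regression (error spread grows with x).
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_white

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 10, 200)
    y = 1.0 + 0.5 * x + rng.normal(0, 0.2 * x)   # variance grows with x
    X = sm.add_constant(np.column_stack([x]))

    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
    print(f"White's test: LM = {lm_stat:.1f}, p = {lm_pvalue:.4f}")
    ```

    A small p-value objectively flags the violation that a visual residual plot might only suggest, which is the pedagogical point of pairing the formal test with the graphical analysis.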

  7. Modeling axisymmetric flow and transport

    USGS Publications Warehouse

    Langevin, C.D.

    2008-01-01

    Unmodified versions of common computer programs such as MODFLOW, MT3DMS, and SEAWAT that use Cartesian geometry can accurately simulate axially symmetric ground water flow and solute transport. Axisymmetric flow and transport are simulated by adjusting several input parameters to account for the increase in flow area with radial distance from the injection or extraction well. Logarithmic weighting of interblock transmissivity, a standard option in MODFLOW, can be used for axisymmetric models to represent the linear change in hydraulic conductance within a single finite-difference cell. Results from three test problems (ground water extraction, an aquifer push-pull test, and upconing of saline water into an extraction well) show good agreement with analytical solutions or with results from other numerical models designed specifically to simulate the axisymmetric geometry. Axisymmetric models are not commonly used but can offer an efficient alternative to full three-dimensional models, provided the assumption of axial symmetry can be justified. For the upconing problem, the axisymmetric model was more than 1000 times faster than an equivalent three-dimensional model. Computational gains with the axisymmetric models may be useful for quickly determining appropriate levels of grid resolution for three-dimensional models and for estimating aquifer parameters from field tests.

  8. Non-ignorable missingness in logistic regression.

    PubMed

    Wang, Joanna J J; Bartlett, Mark; Ryan, Louise

    2017-08-30

Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show the observed likelihood is non-identifiable under non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.
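
    A small simulation can make the bias concrete. In the sketch below (all parameters invented, not the authors' model), the probability of a missing outcome depends on the outcome jointly with the covariate, and a complete-case logistic fit is compared with the full-data fit:

    ```python
    # Outcome-dependent missingness biasing a complete-case logistic fit.
    # True model: logit P(y=1) = -1.0 + 0.8 x. Missingness is non-ignorable:
    # observations with y=1 and x>0 are dropped with high probability.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 50_000
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))
    y = rng.binomial(1, p)

    miss_p = np.where((y == 1) & (x > 0), 0.7, 0.1)
    observed = rng.random(n) > miss_p

    X = sm.add_constant(x)
    full = sm.Logit(y, X).fit(disp=0)
    cc = sm.Logit(y[observed], X[observed]).fit(disp=0)
    print("full data:    ", np.round(full.params, 3))
    print("complete case:", np.round(cc.params, 3))
    ```

    The complete-case slope is visibly attenuated relative to the truth, and no amount of data fixes it; hence the selection-model factorisation and sensitivity analysis the abstract advocates.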

  9. Discriminative Relational Topic Models.

    PubMed

    Chen, Ning; Zhu, Jun; Xia, Fei; Zhang, Bo

    2015-05-01

Relational topic models (RTMs) provide a probabilistic generative process to describe both the link structure and document contents for document networks, and they have shown promise on predicting network structures and discovering latent topic representations. However, existing RTMs have limitations in both the restricted model expressiveness and incapability of dealing with imbalanced network data. To expand the scope and improve the inference accuracy of RTMs, this paper presents three extensions: 1) unlike the common link likelihood with a diagonal weight matrix that allows same-topic interactions only, we generalize it to use a full weight matrix that captures all pairwise topic interactions and is applicable to asymmetric networks; 2) instead of doing standard Bayesian inference, we perform regularized Bayesian inference (RegBayes) with a regularization parameter to deal with the imbalanced link structure issue in real networks and improve the discriminative ability of learned latent representations; and 3) instead of doing variational approximation with strict mean-field assumptions, we present collapsed Gibbs sampling algorithms for the generalized relational topic models by exploring data augmentation without making restricting assumptions. Under the generic RegBayes framework, we carefully investigate two popular discriminative loss functions, namely, the logistic log-loss and the max-margin hinge loss. Experimental results on several real network datasets demonstrate the significance of these extensions on improving prediction performance.

  10. What lies behind crop decisions? Coming to terms with revealing farmers' preferences

    NASA Astrophysics Data System (ADS)

    Gomez, C.; Gutierrez, C.; Pulido-Velazquez, M.; López Nicolás, A.

    2016-12-01

    The paper offers a fully-fledged applied revealed preference methodology to screen and represent farmers' choices as the solution of an optimal program involving trade-offs among the alternative welfare outcomes of crop decisions, such as profits, income security and ease of management. The recursive two-stage method is proposed as an alternative that copes with the methodological problems inherent in common-practice positive mathematical programming (PMP) methodologies. Unlike PMP, in the model proposed in this paper the non-linear costs that are required for both calibration and smooth adjustment are not at odds with the assumptions of linear Leontief technologies and fixed crop prices and input costs. The method frees the model from ad-hoc assumptions about costs, recovering the potential of economic analysis as a means to understand the rationale behind observed and forecasted farmers' decisions, and enhancing the potential of the model to support policy making in relevant domains such as agricultural policy, water management, risk management and climate change adaptation. After the introduction, where the methodological drawbacks and challenges are set out, section two presents the theoretical model, section three develops its empirical application and presents its implementation to a Spanish irrigation district, and section four concludes and makes suggestions for further research.

  11. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    PubMed Central

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (which varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than the other assumptions (up to 20pp and 37pp larger, respectively). Conclusion Different assumptions about disease progression on ART and after ART interruption did not affect the fraction of HIV infections averted by expanded ART, unless ART dropouts could only re-initiate ART at low CD4 counts. They had a larger influence on the fraction of HIV-related deaths averted by expanded ART. PMID:29554136

  12. Assumption Trade-Offs When Choosing Identification Strategies for Pre-Post Treatment Effect Estimation: An Illustration of a Community-Based Intervention in Madagascar.

    PubMed

    Weber, Ann M; van der Laan, Mark J; Petersen, Maya L

    2015-03-01

    Failure (or success) in finding a statistically significant effect of a large-scale intervention may be due to choices made in the evaluation. To highlight the potential limitations and pitfalls of some common identification strategies used for estimating causal effects of community-level interventions, we apply a roadmap for causal inference to a pre-post evaluation of a national nutrition program in Madagascar. Selection into the program was non-random and strongly associated with the pre-treatment (lagged) outcome. Using structural causal models (SCM), directed acyclic graphs (DAGs) and simulated data, we illustrate that an estimand with the outcome defined as the post-treatment outcome controls for confounding by the lagged outcome but not by possible unmeasured confounders. Two separate differencing estimands (of the pre- and post-treatment outcome) have the potential to adjust for a certain type of unmeasured confounding, but introduce bias if the additional identification assumptions they rely on are not met. In order to illustrate the practical impact of choice between three common identification strategies and their corresponding estimands, we used observational data from the community nutrition program in Madagascar to estimate each of these three estimands. Specifically, we estimated the average treatment effect of the program on the community mean nutritional status of children 5 years and under and found that the estimate based on the post-treatment estimand was about a quarter of the magnitude of either of the differencing estimands (0.066 SD vs. 0.26-0.27 SD increase in mean weight-for-age z-score). Choice of estimand clearly has important implications for the interpretation of the success of the program to improve nutritional status of young children. A careful appraisal of the assumptions underlying the causal model is imperative before committing to a statistical model and progressing to estimation. However, knowledge about the data-generating process must be sufficient in order to choose the identification strategy that gets us closest to the truth.
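
    A stylized simulation of the trade-off the abstract illustrates (my own data-generating process, not the Madagascar data): with a time-invariant unmeasured confounder, a differencing estimand removes the bias while adjusting for the lagged outcome does not; under selection on the lagged outcome alone, the ranking reverses.

        import numpy as np

        rng = np.random.default_rng(7)
        n, true_effect = 100000, 0.25
        u = rng.normal(size=n)                          # unmeasured confounder
        y0 = u + rng.normal(size=n)                     # pre-treatment outcome
        a = rng.binomial(1, 1 / (1 + np.exp(-u)))       # selection on u
        y1 = true_effect * a + u + rng.normal(size=n)   # post-treatment outcome

        def coef_on_a(y, covs):
            X = np.column_stack([np.ones(n), a] + covs)
            return np.linalg.lstsq(X, y, rcond=None)[0][1]

        print(coef_on_a(y1, [y0]))     # post-outcome estimand: biased upward by u
        print(coef_on_a(y1 - y0, []))  # differencing estimand: unbiased here
        # If selection instead depends on y0 alone, adjusting for y0 removes
        # the bias and differencing introduces it (regression to the mean).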

  13. A continuous-time adaptive particle filter for estimations under measurement time uncertainties with an application to a plasma-leucine mixed effects model

    PubMed Central

    2013-01-01

    Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are not by themselves ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to the data from the clinical study, the MTU-PF shows a performance similar to that of the standard particle filter with respect to the quality of the estimated parameters, but proves to be less prone to degeneracy than the standard filter. PMID:23331521
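
    A minimal bootstrap particle filter with measurement time uncertainty, in the spirit of the MTU-PF but not the authors' algorithm (all dynamics, parameters and observations below are my own toy choices): each particle draws its own guess of the true measurement time around the nominal one before being weighted.

        import numpy as np

        # State: scalar exponential decay with process noise.
        rng = np.random.default_rng(0)
        n, theta, q, r, t_sigma = 2000, 0.5, 0.05, 0.1, 0.2

        x = rng.normal(1.0, 0.1, n)                   # initial particle cloud
        t_now = 0.0
        for t_nominal, y in [(1.0, 0.55), (2.0, 0.35), (3.0, 0.20)]:
            t_meas = t_nominal + rng.normal(0.0, t_sigma, n)  # uncertain times
            dt = np.clip(t_meas - t_now, 1e-6, None)
            x = x * np.exp(-theta * dt) + rng.normal(0.0, np.sqrt(q * dt))
            w = np.exp(-0.5 * (y - x) ** 2 / r)       # Gaussian likelihood weights
            x = x[rng.choice(n, n, p=w / w.sum())]    # multinomial resampling
            t_now = t_nominal                         # approximation: reset clock
            print(t_nominal, x.mean())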

  14. The Teacher, the Physician and the Person: Exploring Causal Connections between Teaching Performance and Role Model Types Using Directed Acyclic Graphs

    PubMed Central

    Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.

    2013-01-01

    Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020

  15. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
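
    The MEU rule at the heart of this analysis is easy to state in code: choose the decision with the largest expected utility given the posterior class probabilities. The utility matrix below is illustrative; its equal off-diagonal entries per column encode the equal error utility assumption.

        import numpy as np

        # U[d, h]: utility of deciding d when hypothesis h is true.
        U = np.array([[1.0, 0.2, 0.2],
                      [0.2, 1.0, 0.2],
                      [0.2, 0.2, 1.0]])

        def meu_decision(posteriors):
            """posteriors: (n, 3) class probabilities; returns MEU decisions."""
            return np.argmax(posteriors @ U.T, axis=1)

        p = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3]])
        print(meu_decision(p))   # -> [0, 1]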

  16. Expert assessment concludes negative emissions scenarios may not deliver

    NASA Astrophysics Data System (ADS)

    Vaughan, Naomi E.; Gough, Clair

    2016-09-01

    Many integrated assessment models (IAMs) rely on the availability and extensive use of biomass energy with carbon capture and storage (BECCS) to deliver emissions scenarios consistent with limiting climate change to below 2 °C average temperature rise. BECCS has the potential to remove carbon dioxide (CO2) from the atmosphere, delivering ‘negative emissions’. The deployment of BECCS at the scale assumed in IAM scenarios is highly uncertain: biomass energy is commonly used but not at such a scale, and CCS technologies have been demonstrated but not commercially established. Here we present the results of an expert elicitation process that explores the explicit and implicit assumptions underpinning the feasibility of BECCS in IAM scenarios. Our results show that the assumptions are considered realistic regarding technical aspects of CCS but unrealistic regarding the extent of bioenergy deployment, and development of adequate societal support and governance structures for BECCS. The results highlight concerns about the assumed magnitude of carbon dioxide removal achieved across a full BECCS supply chain, with the greatest uncertainty in bioenergy production. Unrealistically optimistic assumptions regarding the future availability of BECCS in IAM scenarios could lead to the overshoot of critical warming limits and have significant impacts on near-term mitigation options.

  17. Zones of consensus and zones of conflict: questioning the "common morality" presumption in bioethics.

    PubMed

    Turner, Leigh

    2003-09-01

    Many bioethicists assume that morality is in a state of wide reflective equilibrium. According to this model of moral deliberation, public policymaking can build upon a core common morality that is pretheoretical and provides a basis for practical reasoning. Proponents of the common morality approach to moral deliberation make three assumptions that deserve to be viewed with skepticism. First, they commonly assume that there is a universal, transhistorical common morality that can serve as a normative baseline for judging various actions and practices. Second, advocates of the common morality approach assume that the common morality is in a state of relatively stable, ordered, wide reflective equilibrium. Third, casuists, principlists, and other proponents of common morality approaches assume that the common morality can serve as a basis for the specification of particular policies and practical recommendations. These three claims fail to recognize the plural moral traditions that are found in multicultural, multiethnic, multifaith societies such as the United States and Canada. A more realistic recognition of multiple moral traditions in pluralist societies would be considerably more skeptical about the contributions that common morality approaches in bioethics can make to resolving contentious moral issues.

  18. Plant ecosystem responses to rising atmospheric CO2: applying a "two-timing" approach to assess alternative hypotheses for mechanisms of nutrient limitation

    NASA Astrophysics Data System (ADS)

    Medlyn, B.; Jiang, M.; Zaehle, S.

    2017-12-01

    There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.

  19. Seeping Deficit Thinking Assumptions Maintain the Neoliberal Education Agenda: Exploring Three Conceptual Frameworks of Deficit Thinking in Inner-City Schools

    ERIC Educational Resources Information Center

    Sharma, Manu

    2018-01-01

    This article draws awareness to the subtle and seeping "common sense" mentality of neoliberalism and deficit thinking assumptions about racially marginalized students in inner-city schools. From a literature review conducted on deficit thinking and deficit practices in schools, I developed three different frameworks for understanding the…

  20. Does Home Visiting Benefit Only First-Time Mothers?: Evidence from Healthy Families Virginia

    ERIC Educational Resources Information Center

    Huntington, Lee; Galano, Joseph

    2013-01-01

    It is a common assumption that mothers who have had previous births would participate less fully and have poorer outcomes from early home visitation programs than would first-time mothers. The authors conducted a qualitative and quantitative study to test that assumption by measuring three aspects of participation: time in the program, the number…

  1. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    ERIC Educational Resources Information Center

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
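
    A quick illustration of the point (my own simulated data): strictly positive, skewed data fail a normality check but pass after a log transform.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        x = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # skewed, positive data

        print(stats.shapiro(x).pvalue)           # tiny p-value: normality rejected
        print(stats.shapiro(np.log(x)).pvalue)   # large p-value: log-scale is normal
        print(stats.skew(x), stats.skew(np.log(x)))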

  2. A brief review of 210Pb sediment dating models and uncertainties in a world of global change

    NASA Astrophysics Data System (ADS)

    Sanchez-Cabeza, J. A.; Ruiz-Fernandez, A. C.

    2016-12-01

    Irrespective of the model names used, assumptions and (usually forgotten) uncertainties, the fact is that 210Pb sediment dating is an increasingly relevant tool in our world of global change. 210Pb dating results are needed to assess historical trends of sea level rise, quantify blue carbon fluxes and reconstruct environmental records of biogeochemical proxies for diverse processes in the aquatic ecosystems (such as ocean acidification, hypoxia and pollution). Although in the past 210Pb profiles departing from "ideal" decay trends were usually discarded, all profiles contain useful information. In this work we review the principles and assumptions of the most common 210Pb dating models, and propose a logical formulation and classification of the models. 210Pb dating models provide two kinds of results: chronologies (i.e. age models) and accumulation rates. In many cases, the use of sediment and/or mass accumulation rates (SAR and MAR) is needed to assess environmental fluxes or, simply, to describe changes such as catchment erosion or saltmarsh accretion. Although quadratic propagation of uncertainties is a well-known technique, it requires that all variables be fully independent and involves demanding mathematical expressions that can easily lead to wrong results. We present here a Monte Carlo method that makes the calculation easier and, likely, error-free. Not unexpectedly, the most important uncertainty sources are measurement uncertainties, which impose limitations on common techniques such as gamma spectrometry. 210Pb chronology does not cover all anthropogenic impacts, such as those caused by ancient civilizations, so radiocarbon also plays an important role in this kind of work. We also conceptually revise the limitations of both techniques and encourage scientists to link both dating techniques with an open mind. Acknowledgements: projects CONACYT PDCPN2013-01/214349 and CB2010/153492, UNAM PAPIIT-IN203313, PRODEP network "Aquatic contamination: levels and effects" (year 3).
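
    Monte Carlo propagation of the kind the abstract advocates is a few lines of code. The sketch below propagates measurement uncertainty through a CRS-type age, t = ln(A0/Az)/lambda; the inventories and their uncertainties are illustrative, and correlated inputs could be handled simply by sampling them jointly.

        import numpy as np

        rng = np.random.default_rng(3)
        lam = np.log(2) / 22.3                 # 210Pb decay constant (1/yr)
        n = 100_000

        A0 = rng.normal(5000.0, 250.0, n)      # total excess inventory (Bq/m^2)
        Az = rng.normal(1200.0, 100.0, n)      # inventory below depth z

        t = np.log(A0 / Az) / lam              # age of depth z for each draw
        print(f"age = {t.mean():.1f} +/- {t.std():.1f} yr")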

  3. Influence of Climate Change on Flood Hazard using Climate Informed Bayesian Hierarchical Model in Johnson Creek River

    NASA Astrophysics Data System (ADS)

    Zarekarizi, M.; Moradkhani, H.

    2015-12-01

    Extreme events have been shown to be affected by climate change, influencing hydrologic simulations for which stationarity is usually a main assumption. Studies have argued that this assumption can lead to large bias in model estimates and, consequently, to higher flood hazard. Motivated by the importance of non-stationarity, we determined how the exceedance probabilities have changed over time in Johnson Creek River, Oregon. This helps estimate the probability of failure of a structure that, following common practice, was designed to resist only the less extreme floods. We therefore built a climate-informed Bayesian hierarchical model in which non-stationarity was considered in the modeling framework. Principal component analysis shows that the North Atlantic Oscillation (NAO), Western Pacific Index (WPI) and Eastern Asia (EA) indices most strongly affect streamflow in this river. We modeled flood extremes using the peaks over threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because the model can then be based on more information. We used available threshold selection methods to select a suitable threshold for the study area. Accounting for non-stationarity, model parameters vary through time with the climate indices. We developed several model scenarios and chose the one that could best explain the variation in the data based on performance measures. We also estimated return periods under the non-stationarity condition. Results show that ignoring non-stationarity could understate the flood hazard by up to a factor of four, increasing the probability of an in-stream structure being overtopped.
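
    An illustrative non-stationary peaks-over-threshold fit (a frequentist sketch with made-up data, not the authors' Bayesian hierarchical model): the GPD scale of each exceedance is linked to a climate index c through log(scale) = b0 + b1*c and fitted by maximum likelihood.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        c = rng.normal(size=300)                        # e.g. an NAO-like index
        true_scale = np.exp(0.5 + 0.4 * c)
        y = genpareto.rvs(0.1, scale=true_scale, random_state=rng)  # exceedances

        def nll(params):
            b0, b1, xi = params
            scale = np.exp(b0 + b1 * c)
            return -genpareto.logpdf(y, xi, scale=scale).sum()

        fit = minimize(nll, x0=[0.0, 0.0, 0.05], method="Nelder-Mead")
        print(fit.x)   # recovers roughly (0.5, 0.4, 0.1)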

  4. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
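
    The metric versus topological distinction discussed above reduces to two neighbour-selection rules; a minimal sketch (my own toy positions) follows. Wider visual coverage or higher acuity effectively enlarges the radius, which is how the realistic sensory parameters expand perceptual ranges.

        import numpy as np

        def metric_neighbours(pos, i, radius):
            """All conspecifics within a perceptual radius of individual i."""
            d = np.linalg.norm(pos - pos[i], axis=1)
            return np.flatnonzero((d > 0) & (d <= radius))

        def topological_neighbours(pos, i, k):
            """The k nearest conspecifics, regardless of distance."""
            d = np.linalg.norm(pos - pos[i], axis=1)
            return np.argsort(d)[1:k + 1]    # skip self at distance 0

        pos = np.random.default_rng(5).uniform(0, 10, (50, 2))
        print(metric_neighbours(pos, 0, radius=2.0))
        print(topological_neighbours(pos, 0, k=7))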

  5. Review of methods for handling confounding by cluster and informative cluster size in clustered data

    PubMed Central

    Seaman, Shaun; Pavlou, Menelaos; Copas, Andrew

    2014-01-01

    Clustered data are common in medical research. Typically, one is interested in a regression model for the association between an outcome and covariates. Two complications that can arise when analysing clustered data are informative cluster size (ICS) and confounding by cluster (CBC). ICS and CBC mean that the outcome of a member given its covariates is associated with, respectively, the number of members in the cluster and the covariate values of other members in the cluster. Standard generalised linear mixed models for cluster-specific inference and standard generalised estimating equations for population-average inference assume, in general, the absence of ICS and CBC. Modifications of these approaches have been proposed to account for CBC or ICS. This article is a review of these methods. We express their assumptions in a common format, thus providing greater clarity about the assumptions that methods proposed for handling CBC make about ICS and vice versa, and about when different methods can be used in practice. We report relative efficiencies of methods where available, describe how methods are related, identify a previously unreported equivalence between two key methods, and propose some simple additional methods. Unnecessarily using a method that allows for ICS/CBC has an efficiency cost when ICS and CBC are absent. We review tools for identifying ICS/CBC. A strategy for analysis when CBC and ICS are suspected is demonstrated by examining the association between socio-economic deprivation and preterm neonatal death in Scotland. PMID:25087978

  6. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
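
    The presumed β-PDF step works as follows: the two beta parameters are set by moment matching from the mean and variance of the mixture fraction, and the source term is averaged against that PDF. A self-contained sketch with a toy source term (the function and values are mine, not the paper's):

        import numpy as np
        from scipy.stats import beta

        def beta_params(z_mean, z_var):
            # moment matching: a, b chosen so the beta PDF has these moments
            factor = z_mean * (1.0 - z_mean) / z_var - 1.0
            return z_mean * factor, (1.0 - z_mean) * factor

        def averaged_source(source, z_mean, z_var, n=2001):
            a, b = beta_params(z_mean, z_var)
            z = np.linspace(1e-6, 1 - 1e-6, n)
            return np.trapz(source(z) * beta.pdf(z, a, b), z)

        source = lambda z: np.exp(-((z - 0.3) ** 2) / 0.01)   # toy source term
        print(averaged_source(source, z_mean=0.3, z_var=0.02))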

  7. Behavioral modeling of human choices reveals dissociable effects of physical effort and temporal delay on reward devaluation.

    PubMed

    Klein-Flügge, Miriam C; Kennerley, Steven W; Saraiva, Ana C; Penny, Will D; Bestmann, Sven

    2015-03-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity.
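
    The two discounting shapes contrasted above, in minimal form. The hyperbolic form for delay is standard; the inverse-sigmoidal form for effort below is one plausible parameterization consistent with the description (initially shallow, then accelerating devaluation), not the paper's fitted model.

        import numpy as np

        def delay_discount(amount, delay, k=0.1):
            return amount / (1.0 + k * delay)        # hyperbolic: steep early

        def effort_discount(amount, effort, slope=8.0, half=0.6):
            # small losses at low effort, large losses at high effort
            return amount / (1.0 + np.exp(slope * (effort - half)))

        delays = np.array([0.0, 5.0, 20.0])
        efforts = np.array([0.1, 0.5, 0.9])
        print(delay_discount(10.0, delays))    # [10.0, 6.67, 3.33]
        print(effort_discount(10.0, efforts))  # [9.82, 6.90, 0.83]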

  8. Behavioral Modeling of Human Choices Reveals Dissociable Effects of Physical Effort and Temporal Delay on Reward Devaluation

    PubMed Central

    Klein-Flügge, Miriam C.; Kennerley, Steven W.; Saraiva, Ana C.; Penny, Will D.; Bestmann, Sven

    2015-01-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity. PMID:25816114

  9. Flexible modeling improves assessment of prognostic value of C-reactive protein in advanced non-small cell lung cancer

    PubMed Central

    Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D

    2010-01-01

    Background: C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). Methods: We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). Results: In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03–1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Conclusion: Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP. PMID:20234363
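
    The two Cox assumptions at issue can be probed with standard tools; below is a generic sketch using the lifelines library on one of its bundled datasets, not the authors' spline-based flexible model or the NSCLC cohort.

        from lifelines import CoxPHFitter
        from lifelines.datasets import load_rossi
        from lifelines.statistics import proportional_hazard_test

        df = load_rossi()
        cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
        proportional_hazard_test(cph, df).print_summary()   # PH assumption check

        df["age_sq"] = df["age"] ** 2        # crude probe of the linearity assumption
        cph2 = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
        print(cph2.summary[["coef", "p"]])   # significant age_sq flags non-linearity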

  10. The Mechanisms of Water Exchange: The Regulatory Roles of Multiple Interactions in Social Wasps.

    PubMed

    Agrawal, Devanshu; Karsai, Istvan

    2016-01-01

    Evolutionary benefits of task fidelity and improving information acquisition via multiple transfers of materials between individuals in a task partitioned system have been shown before, but in this paper we provide a mechanistic explanation of these phenomena. Using a simple mathematical model describing the individual interactions of the wasps, we explain the functioning of the common stomach, an information center, which governs construction behavior and task change. Our central hypothesis is a symmetry between foragers who deposit water and foragers who withdraw water into and out of the common stomach. We combine this with a trade-off between acceptance and resistance to water transfer. We ultimately derive a mathematical function that relates the number of interactions that foragers complete with common stomach wasps during a foraging cycle. We use field data and additional model assumptions to calculate values of our model parameters, and we use these to explain why the fullness of the common stomach stabilizes just below 50 percent, why the average number of successful interactions between foragers and the wasps forming the common stomach is between 5 and 7, and why there is a variation in this number of interactions over time. Our explanation is that our proposed water exchange mechanism places natural bounds on the number of successful interactions possible, water exchange is set to optimize mediation of water through the common stomach, and the chance that foragers abort their task prematurely is very low.

  11. The Mechanisms of Water Exchange: The Regulatory Roles of Multiple Interactions in Social Wasps

    PubMed Central

    Agrawal, Devanshu; Karsai, Istvan

    2016-01-01

    Evolutionary benefits of task fidelity and improving information acquisition via multiple transfers of materials between individuals in a task partitioned system have been shown before, but in this paper we provide a mechanistic explanation of these phenomena. Using a simple mathematical model describing the individual interactions of the wasps, we explain the functioning of the common stomach, an information center, which governs construction behavior and task change. Our central hypothesis is a symmetry between foragers who deposit water and foragers who withdraw water into and out of the common stomach. We combine this with a trade-off between acceptance and resistance to water transfer. We ultimately derive a mathematical function that relates the number of interactions that foragers complete with common stomach wasps during a foraging cycle. We use field data and additional model assumptions to calculate values of our model parameters, and we use these to explain why the fullness of the common stomach stabilizes just below 50 percent, why the average number of successful interactions between foragers and the wasps forming the common stomach is between 5 and 7, and why there is a variation in this number of interactions over time. Our explanation is that our proposed water exchange mechanism places natural bounds on the number of successful interactions possible, water exchange is set to optimize mediation of water through the common stomach, and the chance that foragers abort their task prematurely is very low. PMID:26751076

  12. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). The spatial distribution of the mixing-state index shows that the particles are not mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in transport chemistry models is well suited to rural areas, but this assumption is less realistic for urban areas close to emission sources.
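
    The diversity and mixing-state metrics referenced above have compact definitions (per-particle diversity D_i from the entropy of in-particle mass fractions, alpha and gamma diversities, and chi = (D_alpha - 1)/(D_gamma - 1)); a sketch following the published definitions as I understand them:

        import numpy as np

        def mixing_state_index(masses):
            """masses: (n_particles, n_species) per-particle species mass."""
            m_part = masses.sum(axis=1)                  # particle masses
            p_i = masses / m_part[:, None]               # in-particle fractions
            with np.errstate(divide="ignore", invalid="ignore"):
                h_i = -np.where(p_i > 0, p_i * np.log(p_i), 0.0).sum(axis=1)
            w = m_part / m_part.sum()                    # mass weights
            d_alpha = np.exp((w * h_i).sum())            # avg particle diversity
            p_bulk = masses.sum(axis=0) / masses.sum()
            p_bulk = p_bulk[p_bulk > 0]
            d_gamma = np.exp(-(p_bulk * np.log(p_bulk)).sum())
            return (d_alpha - 1.0) / (d_gamma - 1.0)

        ext = np.array([[1.0, 0.0], [0.0, 1.0]])   # fully external: chi = 0
        mix = np.array([[0.5, 0.5], [0.5, 0.5]])   # fully internal: chi = 1
        print(mixing_state_index(ext), mixing_state_index(mix))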

  13. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
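
    A sketch of the core assumption (my own minimal simulation, not the authors' full model): a firing-rate variable ramps to a fixed threshold with drift 1/T and noise whose variance scales with the drift, as for Poisson spiking. This yields timing noise with a roughly constant coefficient of variation across intervals, i.e. scale invariance.

        import numpy as np

        rng = np.random.default_rng(2)

        def timed_responses(T, n_trials=300, dt=0.001, c=0.1, thresh=1.0):
            slope = thresh / T
            out = np.empty(n_trials)
            for k in range(n_trials):
                x = t = 0.0
                while x < thresh:
                    x += slope * dt + c * np.sqrt(slope * dt) * rng.normal()
                    t += dt
                out[k] = t
            return out

        for T in (0.5, 1.0, 2.0):
            r = timed_responses(T)
            print(T, round(r.mean(), 3), round(r.std() / r.mean(), 3))  # CV ~ c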

  14. Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate

    NASA Astrophysics Data System (ADS)

    Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah

    2007-11-01

    This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.

  15. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data.

    PubMed

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C; Keijsers, Loes; Hamaker, Ellen L

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
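
    The bias in question is easy to reproduce (my own toy example): generate data from a continuous-time AR process (an Ornstein-Uhlenbeck process) sampled at unequal, ESM-like intervals, then fit a discrete-time AR(1) that pretends the spacing is equal.

        import numpy as np

        rng = np.random.default_rng(11)
        theta = 0.8                          # CT drift; DT phi = exp(-theta*dt)
        dts = rng.choice([0.5, 1.0, 3.0, 9.0], size=2000)  # 9 = "night" gap

        x = [0.0]
        for dt in dts:
            phi = np.exp(-theta * dt)                      # exact OU transition
            sd = np.sqrt((1 - phi**2) / (2 * theta))
            x.append(phi * x[-1] + sd * rng.normal())
        x = np.asarray(x)

        phi_hat = np.polyfit(x[:-1], x[1:], 1)[0]  # naive equal-spacing AR(1) fit
        print(phi_hat)                             # a blend across the true gaps
        print(np.exp(-theta * np.array([0.5, 1.0, 3.0, 9.0])))  # lag-specific phis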

  16. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    PubMed Central

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C.; Keijsers, Loes; Hamaker, Ellen L.

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available. PMID:29104554

  17. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
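
    For context, the classic closed-form change-in-ratio estimator that such generalizations build on (the Paulik-Robson form, in my notation): with type-x proportions p1 and p2 observed before and after removing r_total animals, r_x of them type x, the pre-removal population size under equal encounter probabilities is

        def cir_population_estimate(p1, p2, r_x, r_total):
            """Pre-removal population size N1, assuming equal encounter
            probabilities among subclasses and over time."""
            if p1 == p2:
                raise ValueError("ratios must change for the estimator to exist")
            return (r_x - r_total * p2) / (p1 - p2)

        # Example: the type-x proportion drops from 0.60 to 0.40 after
        # removing 300 animals, 280 of them type x.
        print(cir_population_estimate(0.60, 0.40, 280, 300))   # -> 800.0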

  18. May common model biases reduce CMIP5's ability to simulate the recent Pacific La Niña-like cooling?

    NASA Astrophysics Data System (ADS)

    Luo, Jing-Jia; Wang, Gang; Dommenget, Dietmar

    2018-02-01

    Over the past three decades, sea surface temperature (SST) in the eastern equatorial Pacific has decreased, which has helped reduce the rate of global warming. However, most CMIP5 model simulations with historical radiative forcing do not reproduce this Pacific La Niña-like cooling. Based on the assumption of "perfect" models, previous studies have suggested that errors in simulated internal climate variations and/or external radiative forcing may cause the discrepancy between the multi-model simulations and the observation. But the exact causes remain unclear. Recent studies have suggested that observed SST warming in the other two ocean basins in past decades and the thermostat mechanism in the Pacific in response to increased radiative forcing may also play an important role in driving this La Niña-like cooling. Here, we investigate an alternative hypothesis: that common biases of current state-of-the-art climate models may degrade the models' ability to simulate this trend and thereby contribute to the discrepancy between the multi-model simulations and the observation. Our results suggest that underestimated inter-basin warming contrast across the three tropical oceans, overestimated surface net heat flux and underestimated local SST-cloud negative feedback in the equatorial Pacific may favor an El Niño-like warming bias in the models. Effects of the three common model biases do not cancel one another and jointly explain 50% of the total variance of the discrepancies between the observation and individual models' ensemble mean simulations of the Pacific SST trend. Further efforts on reducing common model biases could help improve simulations of the externally forced climate trends and the multi-decadal climate fluctuations.

  19. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are characterized in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically tested in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  20. Differentiating and evaluating common good and public good: making implicit assumptions explicit in the contexts of consent and duty to participate.

    PubMed

    Bialobrzeski, A; Ried, J; Dabrock, P

    2012-01-01

    The notions 'common good' and 'public good' are mostly used as synonyms in the bioethical discussion of biobanks, but they have different origins and, consequently, should be applied differently. In this article, their respective characteristics are worked out, and it is then examined which consent models emerge from each. Distinguishing the normative and descriptive traits of both concepts, it turns out that one concept is used without justification, while the other fits better the context of a plural society. A reflected use of these differing concepts may help in choosing an appropriate form of consent and may deepen public trust in biobank research. Copyright © 2012 S. Karger AG, Basel.

  1. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes only marginally different, assumptions on pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational, models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  2. A multigenerational effect of parental age on offspring size but not fitness in common duckweed (Lemna minor).

    PubMed

    Barks, P M; Laird, R A

    2016-04-01

    Classic theories on the evolution of senescence make the simplifying assumption that all offspring are of equal quality, so that demographic senescence only manifests through declining rates of survival or fecundity. However, there is now evidence that, in addition to declining rates of survival and fecundity, many organisms are subject to age-related declines in the quality of offspring produced (i.e. parental age effects). Recent modelling approaches allow for the incorporation of parental age effects into classic demographic analyses, assuming that such effects are limited to a single generation. Does this 'single-generation' assumption hold? To find out, we conducted a laboratory study with the aquatic plant Lemna minor, a species for which parental age effects have been demonstrated previously. We compared the size and fitness of 423 laboratory-cultured plants (asexually derived ramets) representing various birth orders, and ancestral 'birth-order genealogies'. We found that offspring size and fitness both declined with increasing 'immediate' birth order (i.e. birth order with respect to the immediate parent), but only offspring size was affected by ancestral birth order. Thus, the assumption that parental age effects on offspring fitness are limited to a single generation does in fact hold for L. minor. This result will guide theorists aiming to refine and generalize modelling approaches that incorporate parental age effects into evolutionary theory on senescence. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  3. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
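
    The common framework described above, in miniature: each node i is a second-order phase oscillator obeying the swing equation M_i th_i'' + D_i th_i' = P_i - sum_j K_ij sin(th_i - th_j). A two-node toy system with illustrative parameters (not one of the paper's test systems):

        import numpy as np
        from scipy.integrate import solve_ivp

        M = np.array([1.0, 1.0]); D = np.array([0.5, 0.5])
        P = np.array([0.5, -0.5])                  # injections sum to zero
        K = np.array([[0.0, 1.0], [1.0, 0.0]])     # coupling strengths

        def swing(t, y):
            th, om = y[:2], y[2:]
            coupling = (K * np.sin(th[:, None] - th[None, :])).sum(axis=1)
            return np.concatenate([om, (P - D * om - coupling) / M])

        sol = solve_ivp(swing, (0.0, 50.0), [0.0, 0.1, 0.0, 0.0])
        th, om = sol.y[:2, -1], sol.y[2:, -1]
        print(np.sin(th[0] - th[1]), om)   # sin of gap -> 0.5; frequencies -> 0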

  4. Identifying gaps in conservation networks: of indicators and uncertainty in geographic-based analyses

    Treesearch

    Curtis H. Flather; Kenneth R. Wilson; Denis J. Dean; William C. McComb

    1997-01-01

    Mapping of biodiversity elements to expose gaps in conservation networks has become a common strategy in nature-reserve design. We review a set of critical assumptions and issues that influence the interpretation and implementation of gap analysis, including: (1) the assumption that a subset of taxa can be used to indicate overall diversity patterns, and (2) the...

  5. Do forest community types provide a sufficient basis to evaluate biological diversity?

    Treesearch

    Samuel A. Cushman; Kevin S. McKelvey; Curtis H. Flather; Kevin McGarigal

    2008-01-01

    Forest communities, defined by the size and configuration of cover types and stand ages, have commonly been used as proxies for the abundance or viability of wildlife populations. However, for community types to succeed as proxies for species abundance, several assumptions must be met. We tested these assumptions for birds in an Oregon forest environment. Measured...

  6. Quantification of material state using reflectance FTIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Criner, Amanda K.; Henry, Christine; Imel, Megan; King, Derek

    2018-04-01

    A common, frequently violated, assumption implicit in many data analysis techniques is that data are of the same quality across observations. The effect of this assumption is discussed and demonstrated for the example of reflectance FTIR measurements of ceramic matrix composites (CMCs). An alternative analysis, which incorporates the variation in the quality of the data, is presented. A comparison between the analyses is used to demonstrate the difference.

  7. Unexpected Learning Competencies of Grades 5 and 6 Pupils in Public Elementary Schools: A Philippine Report

    ERIC Educational Resources Information Center

    Felipe, Abraham I.

    2006-01-01

    The present study tested the assumption of a positive and linear relation between years of schooling and school learning in the Philippine setting. It replicated a 1976 study that had cast doubt on this assumption in the Philippine public educational system. It tested three competing hypotheses for that finding: common sense, the 1976 arrested…

  8. Higher impact of female than male migration on population structure in large mammals.

    PubMed

    Tiedemann, R; Hardy, O; Vekemans, X; Milinkovitch, M C

    2000-08-01

    We simulated large mammal populations using an individual-based stochastic model under various sex-specific migration schemes and life history parameters from the blue whale and the Asian elephant. Our model predicts that genetic structure at nuclear loci is significantly more influenced by female than by male migration. We identified requisite comigration of mother and offspring during gravidity and lactation as the primary cause of this phenomenon. In addition, our model predicts that the common assumption that geographical patterns of mitochondrial DNA (mtDNA) could be translated into female migration rates (Nmf) will cause biased estimates of maternal gene flow when extensive male migration occurs and male mtDNA haplotypes are included in the analysis.

  9. A More Pedagogically Sound Treatment of Beer's Law: A Derivation Based on a Corpuscular-Probability Model

    NASA Astrophysics Data System (ADS)

    Bare, William D.

    2000-07-01

    An argument is presented which suggests that the commonly seen calculus-based derivations of Beer's law may not be adequately useful to students and may in fact contribute to widely held misconceptions about the interaction of light with absorbing samples. For this reason, an alternative derivation of Beer's law based on a corpuscular model and the laws of probability is presented. Unlike many previously reported derivations, that presented here does not require the use of calculus, nor does it require the assumption of absorption properties in an infinitesimally thin film. The corpuscular-probability model and its accompanying derivation of Beer's law are believed to comprise a more pedagogically effective presentation than those presented previously.

  10. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, at which the 2 observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
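
    A minimal sketch of a unimodal, single-apex detection function built from a two-piece normal, in the spirit of the model described above; the parameterization and all values are illustrative assumptions, not the authors' fitted model.

        import numpy as np

        def two_piece_detection(x, apex, sigma_left, sigma_right):
            """Detection probability with a single apex and different spreads per side."""
            x = np.asarray(x, dtype=float)
            sigma = np.where(x < apex, sigma_left, sigma_right)
            return np.exp(-0.5 * ((x - apex) / sigma) ** 2)

        distances = np.linspace(0, 400, 5)  # hypothetical metres from the transect line
        print(two_piece_detection(distances, apex=80.0, sigma_left=60.0, sigma_right=150.0))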

  11. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
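
    The kind of adjustment at issue can be sketched in a few lines: under a compound Poisson model, the variance of the case count is estimated from the sum of squared incident sizes, so a few multi-fatality incidents widen the interval relative to the simple Poisson estimator (with all incident sizes equal to 1 the two coincide). The function name and data below are hypothetical.

        import numpy as np

        def rate_ci(incident_sizes, person_years, z=1.96):
            """Rate and Wald interval with a compound Poisson variance estimate."""
            cases = incident_sizes.sum()
            rate = cases / person_years
            var_cases = (incident_sizes ** 2).sum()  # reduces to `cases` if all sizes are 1
            se = np.sqrt(var_cases) / person_years
            return rate, (rate - z * se, rate + z * se)

        sizes = np.array([1] * 95 + [2, 2, 3, 4, 5])  # a few multiple-fatality incidents
        print(rate_ci(sizes, person_years=2_500_000))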

  12. Deblending using an improved apex-shifted hyperbolic Radon transform based on the Stolt migration operator

    NASA Astrophysics Data System (ADS)

    Gong, Xiangbo; Feng, Fei; Jiao, Xuming; Wang, Shengchao

    2017-10-01

    Simultaneous seismic source separation, also known as deblending, is an essential process for blended acquisition. With the assumption that the blending noise is coherent in the common shot domain but is incoherent in other domains, traditional deblending methods are commonly performed in the common receiver, common midpoint or common offset domain. In this paper, we propose an improved apex-shifted hyperbolic Radon transform (ASHRT) to deblend directly in the common shot domain. A time-axis stretch strategy named Stolt-stretch is introduced to overcome the limitation of the constant velocity assumption of Stolt-based operators. To improve the sparsity in the transform domain, a total variation (TV) norm inversion is implemented to enhance the energy convergence in the Radon panel. Because of the highly efficient Stolt migration and demigration operators in the frequency-wavenumber domain, as well as the flexible source-receiver geometry conditions, this approach is quite suitable for quality control (QC) during streamer acquisition. The synthetic and field examples demonstrate that our method is robust and efficient.

  13. Phylogenetic Analysis Supports the Aerobic-Capacity Model for the Evolution of Endothermy.

    PubMed

    Nespolo, Roberto F; Solano-Iguaran, Jaiber J; Bozinovic, Francisco

    2017-01-01

    The evolution of endothermy is a controversial topic in evolutionary biology, although several hypotheses have been proposed to explain it. To a great extent, the debate has centered on the aerobic-capacity model (AC model), an adaptive hypothesis involving maximum and resting rates of metabolism (MMR and RMR, respectively; hereafter "metabolic traits"). The AC model posits that MMR, a proxy of aerobic capacity and sustained activity, is the target of directional selection and that RMR is also influenced as a correlated response. Associated with this reasoning are the assumptions that (1) factorial aerobic scope (FAS; MMR/RMR) and net aerobic scope (NAS; MMR - RMR), two commonly used indexes of aerobic capacity, show different evolutionary optima and (2) the functional link between MMR and RMR is a basic design feature of vertebrates. To test these assumptions, we performed a comparative phylogenetic analysis in 176 vertebrate species, ranging from fish and amphibians to birds and mammals. Using disparity-through-time analysis, we also explored trait diversification and fitted different evolutionary models to study the evolution of metabolic traits. As predicted, we found (1) a positive phylogenetic correlation between RMR and MMR, (2) diversification of metabolic traits exceeding that of random-walk expectations, (3) that a model assuming selection fits the data better than alternative models, and (4) that a single evolutionary optimum best fits FAS data, whereas a model involving two optima (one for ectotherms and another for endotherms) is the best explanatory model for NAS. These results support the AC model and give novel information concerning the mode and tempo of physiological evolution of vertebrates.

  14. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A Bayesian Multilevel Model for Microcystin Prediction in ...

    EPA Pesticide Factsheets

    The frequency of cyanobacteria blooms in North American lakes is increasing. A major concern with rising cyanobacteria blooms is microcystin, a common cyanobacterial hepatotoxin. To explore the conditions that promote high microcystin concentrations, we analyzed the US EPA National Lake Assessment (NLA) dataset collected in the summer of 2007. The NLA dataset is reported for nine eco-regions. We used the results of random forest modeling as a means of variable selection from which we developed a Bayesian multilevel model of microcystin concentrations. Model parameters under a multilevel modeling framework are eco-region specific, but they are also assumed to be exchangeable across eco-regions for broad continental scaling. The exchangeability assumption ensures that both the common patterns and eco-region specific features will be reflected in the model. Furthermore, the method incorporates appropriate estimates of uncertainty. Our preliminary results show associations between microcystin and turbidity, total nutrients, and N:P ratios. The NLA 2012 will be used for Bayesian updating. The results will help develop management strategies to alleviate microcystin impacts and improve lake quality. This work provides a probabilistic framework for predicting microcystin presences in lakes. It would allow for insights to be made about how changes in nutrient concentrations could potentially change toxin levels.
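
    The exchangeability idea can be illustrated with a toy partial-pooling computation: eco-region means are shrunk toward the continental mean in proportion to their sampling noise, so both common patterns and region-specific features are retained. The data are synthetic and the variance components are assumed known, unlike in the full Bayesian model.

        import numpy as np

        rng = np.random.default_rng(4)
        n_regions = 9
        true_means = rng.normal(1.0, 0.5, n_regions)             # e.g., log-microcystin
        n_lakes = rng.integers(10, 120, n_regions)
        region_means = np.array([rng.normal(m, 1.0, k).mean()
                                 for m, k in zip(true_means, n_lakes)])

        tau2, sigma2 = 0.25, 1.0                   # between- and within-region variances
        weight = tau2 / (tau2 + sigma2 / n_lakes)  # shrinkage factor per region
        grand = region_means.mean()
        shrunk = grand + weight * (region_means - grand)
        print(np.column_stack([n_lakes, region_means.round(2), shrunk.round(2)]))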

  16. An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.

    2016-01-01

    The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world objects such as mountains, craters or valleys, which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather uses an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission, which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept-finding method are presented. Key assumptions are noted, making the routines specific to only certain types of surface datasets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion of the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. A performance summary of the method is presented using the analysis from the Mars Science Laboratory's terminal descent sensing model. Alternate uses are also shown for determining horizon maps and orbiter set times.
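
    A simplified, flat-grid sketch of the intercept search described above: march along the ray until it drops below the bilinearly interpolated terrain height, then bisect to refine. The actual method operates on longitude/latitude grids in a planet-fixed frame; all names and the synthetic terrain below are illustrative.

        import numpy as np

        def terrain_height(grid, x, y, spacing):
            """Bilinear interpolation of grid heights at (x, y)."""
            i, j = int(x // spacing), int(y // spacing)
            fx, fy = x / spacing - i, y / spacing - j
            return (grid[i, j] * (1 - fx) * (1 - fy) + grid[i + 1, j] * fx * (1 - fy)
                    + grid[i, j + 1] * (1 - fx) * fy + grid[i + 1, j + 1] * fx * fy)

        def ray_intercept(origin, direction, grid, spacing, step=5.0, max_range=5e4):
            """March along the ray until below the surface, then bisect the bracket."""
            direction = direction / np.linalg.norm(direction)
            s_prev = 0.0
            for s in np.arange(step, max_range, step):
                p = origin + s * direction
                if p[2] <= terrain_height(grid, p[0], p[1], spacing):
                    lo, hi = s_prev, s
                    for _ in range(30):  # bisection refinement of the bracketed root
                        mid = 0.5 * (lo + hi)
                        q = origin + mid * direction
                        if q[2] <= terrain_height(grid, q[0], q[1], spacing):
                            hi = mid
                        else:
                            lo = mid
                    return origin + hi * direction
                s_prev = s
            return None  # no intercept within max_range

        dem = np.random.default_rng(1).uniform(0, 50, (200, 200))  # toy DEM, 100 m posts
        print(ray_intercept(np.array([5000., 5000., 2000.]),
                            np.array([0.3, 0.1, -1.0]), dem, 100.0))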

  17. Axial-field permanent magnet motors for electric vehicles

    NASA Technical Reports Server (NTRS)

    Campbell, P.

    1981-01-01

    The modelling of an anisotropic alnico magnet for the purpose of field computation involves assigning a value for the material's permeability in the transverse direction. This value is generally based upon the preferred-direction properties, which are all that are easily available. By analyzing the rotation of intrinsic magnetization due to the self-demagnetizing field, it is shown that the common assumptions relating the transverse to the preferred direction are not accurate. Transverse magnetization characteristics are needed, and these are given for Alnico 5, 5-7, and 8 magnets, yielding appropriate permeability values.

  18. AGARD standard aeroelastic configurations for dynamic response. 1: Wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1988-01-01

    This report contains experimental flutter data for the AGARD 3-D swept tapered standard configuration Wing 445.6, along with related descriptive data of the model properties required for comparative flutter calculations. As part of a cooperative AGARD-SMP program, guided by the Sub-Committee on Aeroelasticity, this standard configuration may serve as a common basis for comparison of calculated and measured aeroelastic behavior. These comparisons will promote a better understanding of the assumptions, approximations and limitations underlying the various aerodynamic methods applied, thus pointing the way to further improvements.

  19. Autoinflammation and HLA-B27: Beyond Antigen Presentation.

    PubMed

    Sibley, Cailin H

    2016-08-01

    HLA-B27 associated disorders comprise a group of inflammatory conditions which have in common an association with the HLA class I molecule, HLA-B27. Given this association, these diseases are classically considered disorders of adaptive immunity. However, mounting data are challenging this assumption and confirming that innate immunity plays a more prominent role in pathogenesis than previously suspected. In this review, the concept of autoinflammation is discussed and evidence is presented from human and animal models to support a key role for innate immunity in HLA-B27 associated disorders.

  20. Fission product ion exchange between zeolite and a molten salt

    NASA Astrophysics Data System (ADS)

    Gougar, Mary Lou D.

    The electrometallurgical treatment of spent nuclear fuel (SNF) has been developed at Argonne National Laboratory (ANL) and has been demonstrated through processing the sodium-bonded SNF from the Experimental Breeder Reactor-II in Idaho. In this process, components of the SNF, including U and species more chemically active than U, are oxidized into a bath of lithium-potassium chloride (LiCl-KCl) eutectic molten salt. Uranium is removed from the salt solution by electrochemical reduction. The noble metals and inactive fission products from the SNF remain as solids and are melted into a metal waste form after removal from the molten salt bath. The remaining salt solution contains most of the fission products and transuranic elements from the SNF. One technique that has been identified for removing these fission products and extending the usable life of the molten salt is ion exchange with zeolite A. A model has been developed and tested for its ability to describe the ion exchange of fission product species between zeolite A and a molten salt bath used for pyroprocessing of spent nuclear fuel. The model assumes (1) a system at equilibrium, (2) immobilization of species from the process salt solution via both ion exchange and occlusion in the zeolite cage structure, and (3) chemical independence of the process salt species. The first assumption simplifies the description of this physical system by eliminating the complications of including time-dependent variables. An equilibrium state between species concentrations in the two exchange phases is a common basis for ion exchange models found in the literature. Assumption two is non-simplifying with respect to the mathematical expression of the model. Two Langmuir-like fractional terms (one for each mode of immobilization) compose each equation describing each salt species. The third assumption offers great simplification over more traditional ion exchange modeling, in which interaction of solvent species with each other is considered. (Abstract shortened by UMI.)
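
    The form of the resulting equations can be sketched directly: one Langmuir-like fractional term per immobilization mode, with each species treated independently. Parameter names and values below are hypothetical, not the study's fitted constants.

        def zeolite_loading(c_salt, q_ix, k_ix, q_oc, k_oc):
            """Equilibrium loading vs. salt-phase concentration: ion exchange + occlusion."""
            ion_exchange = q_ix * k_ix * c_salt / (1.0 + k_ix * c_salt)
            occlusion = q_oc * k_oc * c_salt / (1.0 + k_oc * c_salt)
            return ion_exchange + occlusion

        for c in (0.001, 0.01, 0.1):  # salt-phase concentration, illustrative units
            print(c, zeolite_loading(c, q_ix=1.2, k_ix=400.0, q_oc=0.3, k_oc=50.0))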

  1. Ipsative imputation for a 15-item Geriatric Depression Scale in community-dwelling elderly people.

    PubMed

    Imai, Hissei; Furukawa, Toshiaki A; Kasahara, Yoriko; Ishimoto, Yasuko; Kimura, Yumi; Fukutomi, Eriko; Chen, Wen-Ling; Tanaka, Mire; Sakamoto, Ryota; Wada, Taizo; Fujisawa, Michiko; Okumiya, Kiyohito; Matsubayashi, Kozo

    2014-09-01

    Missing data are inevitable in almost all medical studies. Imputation methods using a probabilistic model are common, but they cannot impute individual data and require special software. In contrast, the ipsative imputation method, which substitutes the missing items by the mean of the remaining items within the individual, is easy, does not need any special software, and can provide individual scores. The aim of the present study was to evaluate the validity of the ipsative imputation method using data involving the 15-item Geriatric Depression Scale. Participants were community-dwelling elderly individuals (n = 1178). A structural equation model was constructed. The model fit indexes were calculated to assess the validity of the imputation method when it is used for individuals missing 20% of data or less and 40% of data or less, depending on whether we assumed that their correlation coefficients were the same as in the dataset with no missing items. Finally, we compared path coefficients of the dataset imputed by ipsative imputation with those by multiple imputation. All of the model fit indexes were better under the assumption that the dataset missing 20% of data or less is the same as the dataset without missing data than under the assumption that the datasets differed. However, under the same assumption, the model fit indexes were worse in the dataset missing 40% of data or less. The path coefficients of the dataset imputed by ipsative imputation and by multiple imputation were compatible with each other if the proportion of missing items was 20% or less. Ipsative imputation appears to be a valid imputation method and can be used to impute data in studies using the 15-item Geriatric Depression Scale, if the percentage of its missing items is 20% or less. © 2014 The Authors. Psychogeriatrics © 2014 Japanese Psychogeriatric Society.
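
    Because the method is so simple, it can be stated in a few lines. The sketch below (names hypothetical) replaces each missing item with the mean of that individual's observed items and leaves records alone once missingness exceeds the 20% bound supported above.

        import numpy as np

        def ipsative_impute(scores, max_missing_frac=0.20):
            """scores: (n_subjects, n_items) array with np.nan marking missing items."""
            scores = np.asarray(scores, dtype=float).copy()
            n_items = scores.shape[1]
            for row in scores:
                n_missing = np.isnan(row).sum()
                if n_missing == 0 or n_missing / n_items > max_missing_frac:
                    continue  # leave complete or heavily incomplete records unchanged
                row[np.isnan(row)] = np.nanmean(row)
            return scores

        data = np.array([[1, 0, 1, np.nan, 0] + [1] * 10,   # 1 of 15 items missing
                         [0] * 8 + [np.nan] * 7])           # 7 of 15: exceeds 20%, skipped
        print(ipsative_impute(data))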

  2. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of the disease process, and the choice of time advance.
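
    A toy contrast of the time-advance choice alone (not the paper's models): the same SIR parameters stepped with a near-continuous versus a coarse one-day update yield different epidemic peaks.

        def sir_peak(beta, gamma, s0, i0, dt, days):
            """Explicit-Euler SIR; returns the peak infected fraction."""
            s, i, peak = s0, i0, i0
            for _ in range(int(days / dt)):
                new_inf = beta * s * i * dt
                new_rec = gamma * i * dt
                s, i = s - new_inf, i + new_inf - new_rec
                peak = max(peak, i)
            return peak

        print("dt=0.01:", sir_peak(beta=1.5, gamma=0.5, s0=0.999, i0=0.001, dt=0.01, days=100))
        print("dt=1.0 :", sir_peak(beta=1.5, gamma=0.5, s0=0.999, i0=0.001, dt=1.0, days=100))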

  3. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of the disease process, and the choice of time advance.

  4. Modeling the Covariance Structure of Complex Datasets Using Cognitive Models: An Application to Individual Differences and the Heritability of Cognitive Ability.

    PubMed

    Evans, Nathan J; Steyvers, Mark; Brown, Scott D

    2018-06-05

    Understanding individual differences in cognitive performance is an important part of understanding how variations in underlying cognitive processes can result in variations in task performance. However, the exploration of individual differences in the components of the decision process-such as cognitive processing speed, response caution, and motor execution speed-in previous research has been limited. Here, we assess the heritability of the components of the decision process, with heritability having been a common aspect of individual differences research within other areas of cognition. Importantly, a limitation of previous work on cognitive heritability is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for relatedness structure in a twin study paradigm. This approach can separately quantify different contributions to the heritability of response time. Using data from the Human Connectome Project, we find strong evidence for the heritability of response caution, and more ambiguous evidence for the heritability of cognitive processing speed and motor execution speed. Our study suggests that the assumption made in previous studies-that the heritability of cognitive ability is based on cognitive processing speed-may be incorrect. More generally, our methodology provides a useful avenue for future research in complex data that aims to analyze cognitive traits across different sources of related data, whether the relation is between people, tasks, experimental phases, or methods of measurement. © 2018 Cognitive Science Society, Inc.

  5. Analysis of partially observed clustered data using generalized estimating equations and multiple imputation

    PubMed Central

    Aloisio, Kathryn M.; Swanson, Sonja A.; Micali, Nadia; Field, Alison; Horton, Nicholas J.

    2015-01-01

    Clustered data arise in many settings, particularly within the social and biomedical sciences. As an example, multiple-source reports are commonly collected in child and adolescent psychiatric epidemiologic studies, where researchers use various informants (e.g. parent and adolescent) to provide a holistic view of a subject's symptomatology. Fitzmaurice et al. (1995) have described estimation of multiple-source models using a standard generalized estimating equation (GEE) framework. However, these studies often have missing data due to the additional stages of consent and assent required. The usual GEE is unbiased when missingness is Missing Completely at Random (MCAR) in the sense of Little and Rubin (2002). This is a strong assumption that may not be tenable. Other options, such as weighted generalized estimating equations (WGEEs), are computationally challenging when missingness is non-monotone. Multiple imputation is an attractive method to fit incomplete data models while only requiring the less restrictive Missing at Random (MAR) assumption. Estimation of models for partially observed clustered data was previously computationally challenging; however, recent developments in Stata have facilitated their use in practice. We demonstrate how to utilize multiple imputation in conjunction with a GEE to investigate the prevalence of disordered eating symptoms in adolescents as reported by parents and adolescents, as well as factors associated with concordance and prevalence. The methods are motivated by the Avon Longitudinal Study of Parents and their Children (ALSPAC), a cohort study that enrolled more than 14,000 pregnant mothers in 1991-92 and has followed the health and development of their children at regular intervals. While point estimates were fairly similar to those of the GEE under MCAR, the MAR model had smaller standard errors while requiring less stringent assumptions regarding missingness. PMID:25642154
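
    A rough sketch of the workflow in Python rather than Stata (data are synthetic, and a crude normal-draw imputation stands in for chained equations): fit a binomial GEE to each completed dataset, then pool with Rubin's rules.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 400
        group = np.repeat(np.arange(n // 2), 2)            # paired informant reports
        x = rng.normal(size=n)
        y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))
        x_obs = x.copy()
        x_obs[rng.random(n) < 0.2] = np.nan                # 20% missing covariate

        fits = []
        for _ in range(20):                                # 20 imputed datasets
            x_imp = x_obs.copy()
            miss = np.isnan(x_imp)
            x_imp[miss] = rng.normal(np.nanmean(x_obs), np.nanstd(x_obs), miss.sum())
            res = sm.GEE(y, sm.add_constant(x_imp), groups=group,
                         family=sm.families.Binomial(),
                         cov_struct=sm.cov_struct.Exchangeable()).fit()
            fits.append((res.params[1], res.bse[1]))

        betas, ses = np.array(fits).T
        within, between = (ses ** 2).mean(), betas.var(ddof=1)
        pooled_se = np.sqrt(within + (1 + 1 / len(betas)) * between)  # Rubin's rules
        print(betas.mean(), pooled_se)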

  6. Planning maximally smooth hand movements constrained to nonplanar workspaces.

    PubMed

    Liebermann, Dario G; Krasovsky, Tal; Berman, Sigal

    2008-11-01

    The article characterizes hand paths and speed profiles for movements performed in a nonplanar, 2-dimensional workspace (a hemisphere of constant curvature). The authors assessed endpoint kinematics (i.e., paths and speeds) under the minimum-jerk model assumptions and calculated minimal-amplitude paths (geodesics) and the corresponding speed profiles. The authors also calculated hand speeds using the 2/3 power law. They then compared modeled results with the empirical observations. In all, 10 participants moved their hands forward and backward from a common starting position toward 3 targets located within a hemispheric workspace of small or large curvature. Comparisons between modeled and observed data using 2-way RM-ANOVAs showed that movement direction had no clear influence on hand kinetics (p > .05). Workspace curvature affected the hand paths, which seldom followed geodesic lines. Constraining the paths to different curvatures did not affect the hand speed profiles. Minimum-jerk speed profiles closely matched the observations and were superior to those predicted by the 2/3 power law (p < .001). The authors conclude that speed and path cannot be unambiguously linked under the minimum-jerk assumption when individuals move the hand in a nonplanar 2-dimensional workspace. In such a case, the hands do not follow geodesic paths, but they preserve the speed profile, regardless of the geometric features of the workspace.
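
    For reference, the textbook minimum-jerk speed profile against which observed speeds are compared; this is the standard closed form for a point-to-point movement of amplitude A and duration T, not the authors' code.

        import numpy as np

        def min_jerk_speed(t, A, T):
            """Bell-shaped speed profile; peak 1.875*A/T at mid-movement."""
            tau = np.clip(t / T, 0.0, 1.0)
            return (A / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

        t = np.linspace(0, 1.0, 5)
        print(min_jerk_speed(t, A=0.3, T=1.0))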

  7. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so that it becomes easier to assess whether any of the underlying assumptions are violated.

  8. Impact of Acoustic Radiation Force Excitation Geometry on Shear Wave Dispersion and Attenuation Estimates.

    PubMed

    Lipman, Samantha L; Rouze, Ned C; Palmeri, Mark L; Nightingale, Kathryn R

    2018-04-01

    Shear wave elasticity imaging (SWEI) characterizes the mechanical properties of human tissues to differentiate healthy from diseased tissue. Commercial scanners tend to reconstruct shear wave speeds for a region of interest using time-of-flight methods reporting a single shear wave speed (or elastic modulus) to the end user under the assumptions that tissue is elastic and shear wave speeds are not dependent on the frequency content of the shear waves. Human tissues, however, are known to be viscoelastic, resulting in dispersion and attenuation. Shear wave spectroscopy and spectral methods have been previously reported in the literature to quantify shear wave dispersion and attenuation, commonly making an assumption that the acoustic radiation force excitation acts as a cylindrical source with a known geometric shear wave amplitude decay. This work quantifies the bias in shear dispersion and attenuation estimates associated with making this cylindrical wave assumption when applied to shear wave sources with finite depth extents, as commonly occurs with realistic focal geometries, in elastic and viscoelastic media. Bias is quantified using analytically derived shear wave data and shear wave data generated using finite-element method models. Shear wave dispersion and attenuation bias (up to 15% for dispersion and 41% for attenuation) is greater for more tightly focused acoustic radiation force sources with smaller depths of field relative to their lateral extent (height-to-width ratios <16). Dispersion and attenuation errors associated with assuming a cylindrical geometric shear wave decay in SWEI can be appreciable and should be considered when analyzing the viscoelastic properties of tissues with acoustic radiation force source distributions with limited depths of field. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
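
    The cylindrical-wave assumption being tested can be made concrete: amplitudes are multiplied by sqrt(r) to undo the assumed geometric decay, and attenuation is then read from a log-linear fit. The synthetic example below builds a truly cylindrical decay, so the fit recovers the input value; for sources with finite depth extent this estimate is biased, which is the point of the study.

        import numpy as np

        r = np.linspace(2.0, 10.0, 40)                  # mm lateral to the push, illustrative
        alpha_true = 0.12                               # Np/mm, assumed
        amp = r ** -0.5 * np.exp(-alpha_true * r)       # ideal cylindrical spreading
        alpha_fit = -np.polyfit(r, np.log(amp * np.sqrt(r)), 1)[0]
        print(alpha_fit)                                # recovers 0.12 in this ideal case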

  9. Nonequilibrium shock-heated nitrogen flows using a rovibrational state-to-state method

    NASA Astrophysics Data System (ADS)

    Panesi, M.; Munafò, A.; Magin, T. E.; Jaffe, R. L.

    2014-07-01

    A rovibrational collisional model is developed to study the internal energy excitation and dissociation processes behind a strong shock wave in a nitrogen flow. The reaction rate coefficients are obtained from the ab initio database of the NASA Ames Research Center. The master equation is coupled with a one-dimensional flow solver to study the nonequilibrium phenomena encountered in the gas during a hyperbolic reentry into Earth's atmosphere. The analysis of the populations of the rovibrational levels demonstrates how rotational and vibrational relaxation proceed at the same rate. This contrasts with the common misconception that translational and rotational relaxation occur concurrently. A significant part of the relaxation process occurs in non-quasi-steady-state conditions. Exchange processes are found to have a significant impact on the relaxation of the gas, while predissociation has a negligible effect. The results obtained by means of the full rovibrational collisional model are used to assess the validity of reduced order models (vibrational collisional and multitemperature) which are based on the same kinetic database. It is found that thermalization and dissociation are drastically overestimated by the reduced order models. The reasons of the failure differ in the two cases. In the vibrational collisional model the overestimation of the dissociation is a consequence of the assumption of equilibrium between the rotational energy and the translational energy. The multitemperature model fails to predict the correct thermochemical relaxation due to the failure of the quasi-steady-state assumption, used to derive the phenomenological rate coefficient for dissociation.

  10. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
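
    The non-linearity at issue can be sketched in a few lines: an additive (position-independent) energy score maps to occupancy through a saturating function, so normalized site probabilities and true binding probabilities diverge for the strongest sites at high protein concentration. The energy matrix and chemical-potential scale below are made up for illustration.

        import numpy as np

        BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
        energy = np.array([[0.0, 1.2, 1.5, 2.0],   # per-position mismatch penalties
                           [0.0, 0.8, 2.1, 1.4],   # (kT units; rows are positions)
                           [0.0, 1.9, 0.7, 1.1]])

        def occupancy(site, mu=1.0):
            """Fermi-function occupancy: saturates near 1 for the best sites."""
            e = sum(energy[i, BASES[b]] for i, b in enumerate(site))
            return 1.0 / (1.0 + np.exp(e - mu))

        for site in ("AAA", "ACA", "TGT"):
            print(site, round(occupancy(site), 3))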

  11. Common modeling system for digital simulation

    NASA Technical Reports Server (NTRS)

    Painter, Rick

    1994-01-01

    The Joint Modeling and Simulation System (J-MASS) is a tri-service investigation into a common modeling framework for the development of digital models. The basis for the success of this framework is an X-window-based, open-systems architecture, an object-based/oriented methodology, and a standard interface approach to digital model construction, configuration, execution, and post-processing. For years, Department of Defense (DOD) agencies have produced various weapon systems/technologies and, typically, digital representations of those systems/technologies. These digital representations (models) have also been developed for other reasons, such as studies and analysis, Cost and Operational Effectiveness Analysis (COEA) tradeoffs, etc. Unfortunately, there have been no Modeling and Simulation (M&S) standards, guidelines, or efforts toward commonality in DOD M&S. The typical scenario is that an organization hires a contractor to build hardware, and in doing so a digital model may be constructed. Until recently, this model was not even obtained by the organization. Even if it was procured, it was on a unique platform, in a unique language, with unique interfaces, with the result being unique maintenance requirements. Additionally, the constructors of the model expended more effort in writing the 'infrastructure' of the model/simulation (e.g. user interface, database/database management system, data journalizing/archiving, graphical presentations, environment characteristics, other components in the simulation, etc.) than in producing the model of the desired system. Other side effects include duplication of effort, varying assumptions, lack of credibility/validation, and decentralization in policy and execution. J-MASS provides the infrastructure, standards, toolset, and architecture to permit M&S developers and analysts to concentrate on their area of interest.

  12. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp/dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp/dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet's and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the previous assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.

  13. Analysis of dam-passage survival of yearling and subyearling Chinook salmon and juvenile steelhead at The Dalles Dam, Oregon, 2010

    USGS Publications Warehouse

    Beeman, John W.; Kock, Tobias J.; Perry, Russell W.; Smith, Steven G.

    2011-01-01

    We performed a series of analyses of mark-recapture data from a study at The Dalles Dam during 2010 to determine if model assumptions for estimation of juvenile salmonid dam-passage survival were met and if results were similar to those using the University of Washington's newly developed ATLAS software. The study was conducted by the Pacific Northwest National Laboratory and used acoustic telemetry of yearling Chinook salmon, juvenile steelhead, and subyearling Chinook salmon released at three sites according to the new virtual/paired-release statistical model. This was the first field application of the new model, and the results were used to measure compliance with minimum survival standards set forth in a recent Biological Opinion. Our analyses indicated that most model assumptions were met. The fish groups mixed in time and space, and no euthanized tagged fish were detected. Estimates of reach-specific survival were similar in fish tagged by each of the six taggers during the spring, but not in the summer. Tagger effort was unevenly allocated temporally during tagging of subyearling Chinook salmon in the summer; the difference in survival estimates among taggers was more likely a result of a temporal trend in actual survival than of tagger effects. The reach-specific survival of fish released at the three sites was not equal in the reaches they had in common for juvenile steelhead or subyearling Chinook salmon, violating one model assumption. This violation did not affect the estimate of dam-passage survival, because data from the common reaches were not used in its calculation. Contrary to expectation, precision of survival estimates was not improved by using the most parsimonious model of recapture probabilities instead of the fully parameterized model. Adjusting survival estimates for differences in fish travel times and tag lives increased the dam-passage survival estimate for yearling Chinook salmon by 0.0001 and for juvenile steelhead by 0.0004. The estimate was unchanged for subyearling Chinook salmon. The tag-life-adjusted dam-passage survival estimates from our analyses were 0.9641 (standard error [SE] 0.0096) for yearling Chinook salmon, 0.9534 (SE 0.0097) for juvenile steelhead, and 0.9404 (SE 0.0091) for subyearling Chinook salmon. These were within 0.0001 of estimates made by the University of Washington using the ATLAS software. Contrary to the intent of the virtual/paired-release model to adjust estimates of the paired-release model downward in order to account for differential handling mortality rates between release groups, random variation in survival estimates may result in an upward adjustment of survival relative to estimates from the paired-release model. Further investigation of this property of the virtual/paired-release model likely would prove beneficial. In addition, we suggest that differential selective pressures near release sites of the two control groups could bias estimates of dam-passage survival from the virtual/paired-release model.

  14. Radiation-induced total-deletion mutations in the human hprt gene: a biophysical model based on random walk interphase chromatin geometry

    NASA Technical Reports Server (NTRS)

    Wu, H.; Sachs, R. K.; Yang, T. C.

    1998-01-01

    PURPOSE: To develop a biophysical model that explains the sizes of radiation-induced hprt deletions. METHODS: Key assumptions: (1) Deletions are produced by two DSB that are closer than an interaction distance at the time of DSB induction; (2) Interphase chromatin is modelled by a biphasic random walk distribution; and (3) Misrejoining of DSB from two separate tracks dominates at low LET and misrejoining of DSB from a single track dominates at high LET. RESULTS: The size spectra for radiation-induced total deletions of the hprt gene are calculated. Comparing with the results of Yamada and coworkers for gamma-irradiated human fibroblasts, the study finds that an interaction distance of 0.75 µm will fit both the absolute frequency and the size spectrum of the total deletions. It is also shown that high-LET radiations produce relatively more total deletions of sizes below 0.5 Mb. The model predicts an essential gene to be located between 2 and 3 Mb from the hprt locus towards the centromere. Using the same assumptions and parameters as for evaluating mutation frequencies, a frequency of intra-arm chromosome deletions is calculated that is in agreement with experimental data. CONCLUSIONS: Radiation-induced total-deletion mutations of the human hprt gene and intrachange chromosome aberrations share a common mechanism for their induction.

  15. Inductive reasoning 2.0.

    PubMed

    Hayes, Brett K; Heit, Evan

    2018-05-01

    Inductive reasoning entails using existing knowledge to make predictions about novel cases. The first part of this review summarizes key inductive phenomena and critically evaluates theories of induction. We highlight recent theoretical advances, with a special emphasis on the structured statistical approach, the importance of sampling assumptions in Bayesian models, and connectionist modeling. A number of new research directions in this field are identified including comparisons of inductive and deductive reasoning, the identification of common core processes in induction and memory tasks and induction involving category uncertainty. The implications of induction research for areas as diverse as complex decision-making and fear generalization are discussed. This article is categorized under: Psychology > Reasoning and Decision Making Psychology > Learning. © 2017 Wiley Periodicals, Inc.

  16. Reconceptualising the doctor-patient relationship: recognising the role of trust in contemporary health care.

    PubMed

    Bending, Zara J

    2015-06-01

    The conception of the doctor-patient relationship under Australian law has followed British common law tradition whereby the relationship is founded in a contractual exchange. By contrast, this article presents a rationale and framework for an alternative model-a "Trust Model"-for implementation into law to more accurately reflect the contemporary therapeutic dynamic. The framework has four elements: (i) an assumption that professional conflicts (actual or perceived) with patient safety, motivated by financial or personal interests, should be avoided; (ii) an onus on doctors to disclose these conflicts; (iii) a proposed mechanism to contend with instances where doctors choose not to disclose; and (iv) sanctions for non-compliance with the regime.

  17. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are determined by the physical properties of both liquids and by the flow properties inside a stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal result reproducibility. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure breakage of droplets.

  18. Non-stellar light from high-redshift radiogalaxies

    NASA Technical Reports Server (NTRS)

    Rawlings, Steve; Eales, Stephen A.

    1990-01-01

    With the aid of a new IRCAM image of 3C356, researchers question the common assumption that radiosource-stimulated starbursts are responsible for the extended optical emission aligned with radio structures in high-redshift radiogalaxies. They propose an alternative model in which the radiation from a hidden luminous quasar is beamed along the radio axis and illuminates dense clumps of cool gas to produce both extended narrow emission line regions and, by Thomson scattering, extended optical continua. Simple observational tests of this model are possible and necessary if we are to continue to accept that the color, magnitude and shape evolution of radiogalaxies are controlled by the active evolution of stellar populations.

  19. Random Dynamics

    NASA Astrophysics Data System (ADS)

    Bennett, D. L.; Brene, N.; Nielsen, H. B.

    1987-01-01

    The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.

  20. Revisiting "The Master's Tools": Challenging Common Sense in Cross-Cultural Teacher Education

    ERIC Educational Resources Information Center

    Chinnery, Ann

    2008-01-01

    According to Kevin Kumashiro (2004), education toward a socially just society requires a commitment to challenge common sense notions or assumptions about the world and about teaching and learning. Recalling Audre Lorde's (1984) classic essay, "The Master's Tools Will Never Dismantle the Master's House," I focus on three common sense notions and…

  1. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed, but is a random variable following a geometric distribution with parameter 0 < r* ≤ 1.
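
    For context, the classic two-parameter beta-Poisson approximation that the proposed model generalizes; the three-parameter form adds the geometrically distributed Kmin, and its closed form is given in the paper, not here.

        import numpy as np

        def beta_poisson(dose, alpha, beta):
            """P(infection) at mean dose d: 1 - (1 + d/beta)**(-alpha)."""
            return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

        print(beta_poisson([1, 10, 100, 1000], alpha=0.25, beta=40.0))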

  2. Alternatives for discounting in the analysis of noninferiority trials.

    PubMed

    Snapinn, Steven M

    2004-05-01

    Determining the efficacy of an experimental therapy relative to placebo on the basis of an active-control noninferiority trial requires reference to historical placebo-controlled trials. The validity of the resulting comparison depends on two key assumptions: assay sensitivity and constancy. Since the truth of these assumptions cannot be verified, it seems logical to raise the standard of evidence required to declare efficacy; this concept is referred to as discounting. It is not often recognized that two common design and analysis approaches, setting a noninferiority margin and requiring preservation of a fraction of the standard therapy's effect, are forms of discounting. The noninferiority margin is a particularly poor approach, since its degree of discounting depends on an irrelevant factor. Preservation of effect is more reasonable, but it addresses only the constancy assumption, not the issue of assay sensitivity. Gaining consensus on the most appropriate approach to the design and analysis of noninferiority trials will require a common understanding of the concept of discounting.

  3. What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations

    PubMed Central

    McMurray, Bob; Jongman, Allard

    2012-01-01

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model: the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context-dependent. This study assessed the informational assumptions of several models of speech categorization, in particular the number of cues that form the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2880 fricative productions (Jongman, Wayland & Wong, 2000) spanning many talker and vowel contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values, and manipulated the information in the training set to contrast (1) models based on a small number of invariant cues; (2) models using all cues without compensation; and (3) models in which cues underwent compensation for contextual factors. Compensation was modeled by Computing Cues Relative to Expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved an accuracy similar to listeners' and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed. PMID:21417542
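
    A rough sketch of the C-CuRE idea as described above (shapes and names hypothetical): each cue is re-coded as the residual from its expected value given context, here a linear regression on a dummy-coded talker variable, before categorization.

        import numpy as np

        def relative_cues(cues, context):
            """cues: (n_tokens, n_cues); context: (n_tokens, n_predictors) dummy codes."""
            X = np.column_stack([np.ones(len(context)), context])
            coef, *_ = np.linalg.lstsq(X, cues, rcond=None)  # expected cue values
            return cues - X @ coef                           # cues relative to expectations

        rng = np.random.default_rng(2)
        talker = rng.integers(0, 2, 200)[:, None]             # two talkers, dummy coded
        cues = 3.0 * talker + rng.normal(0, 1, (200, 1))      # talker shifts the raw cue
        print(cues.std(), relative_cues(cues, talker).std())  # talker variance removed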

  4. The Complex Structure of Receptive Fields in the Middle Temporal Area

    PubMed Central

    Richert, Micah; Albright, Thomas D.; Krekelberg, Bart

    2012-01-01

    Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity. PMID:23508640

  5. Bird-vegetation associations in thinned and unthinned young Douglas-fir forests 10 years after thinning

    USGS Publications Warehouse

    Yegorova, Svetlana; Betts, Matthew G.; Hagar, Joan; Puettmann, Klaus J.

    2013-01-01

    Quantitative associations between animals and vegetation have long been used as a basis for conservation and management, as well as in formulating predictions about the influence of resource management and climate change on populations. A fundamental assumption embedded in the use of such correlations is that they remain relatively consistent over time. However, this assumption of stationarity has been rarely tested – even for forest birds, which are frequently considered to be 'indicator species' in management operations. We investigated the temporal dynamics of bird-vegetation relationships in young Douglas-fir (Pseudotsuga menziesii) forests over more than a decade following initial anthropogenic disturbance (commercial thinning). We modeled bird occurrence or abundance as a function of vegetation characteristics for eight common bird species for each of six breeding seasons following forest thinning. Generally, vegetation relationships were highly inconsistent in magnitude across years, but remained positive or negative within species. For 3 species, relationships that were initially strong dampened over time. For other species, strength of vegetation association was apparently stochastic. These findings indicate that caution should be used when interpreting weak bird-vegetation relationships found in short-term studies and parameterizing predictive models with data collected over the short term.

  6. Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane

    This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.

  7. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes that such attempts are futile. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. If one starts with the notion of moral strangers, there is, by definition, no possibility of a content-full moral discourse among them. In other words, there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and then concluding that content-full morality is impossible among moral strangers. I argue that it is problematic to treat traditions as solid and immutable structures that insulate people across their boundaries. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. Because it is the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of ways of moral knowledge other than the foundationalist one. Finally, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how these assumptions have shaped his critique of the alternatives.

  8. Stimulus-specific variability in color working memory with delayed estimation.

    PubMed

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I

    2014-04-08

    Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Post hoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.

  9. A weighted U-statistic for genetic association analyses of sequencing data.

    PubMed

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption about the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., when the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very-low-density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.

  10. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
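
    The two best-fitting forms can be stated compactly; a sketch using their standard definitions from the literature (parameter values are illustrative only):

    ```python
    import numpy as np

    def prelec2(p, gamma, delta):
        """Prelec's two-parameter form: w(p) = exp(-delta * (-ln p)^gamma)."""
        return np.exp(-delta * (-np.log(p)) ** gamma)

    def lin_log_odds(p, gamma, delta):
        """Linear-in-log-odds form: w(p) = d*p^g / (d*p^g + (1-p)^g)."""
        return delta * p ** gamma / (delta * p ** gamma + (1 - p) ** gamma)

    p = np.linspace(0.01, 0.99, 9)
    print(np.round(prelec2(p, 0.65, 1.0), 3))
    print(np.round(lin_log_odds(p, 0.6, 0.8), 3))
    ```

    The qualitative similarity of the two curves over most of [0, 1] is exactly what makes adaptive design optimization necessary: informative designs concentrate on the probabilities where the forms differ most.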

  11. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation to obtain parameter estimates, under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are performed by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated with an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
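
    A sketch of the two survival models and the Poisson-likelihood fit described above (Python; the dish counts, plating efficiency, and starting values are invented for illustration, and this is not the RAD-ADAPT code itself):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dose = np.array([0., 2., 4., 6., 8.])           # Gy
    plated = np.array([100, 200, 400, 1000, 2000])  # cells seeded per dish
    counts = np.array([85, 120, 95, 70, 30])        # colonies (illustrative)

    def lq_surv(D, alpha, beta):
        # Linear-quadratic: S(D) = exp(-(alpha*D + beta*D^2))
        return np.exp(-(alpha * D + beta * D ** 2))

    def shmt_surv(D, D0, n):
        # Single-hit multi-target: S(D) = 1 - (1 - exp(-D/D0))^n
        return 1.0 - (1.0 - np.exp(-D / D0)) ** n

    def neg_log_lik(params, surv_fn):
        pe, a, b = params                       # pe = plating efficiency
        mu = plated * pe * surv_fn(dose, a, b)  # Poisson mean colony count
        if np.any(mu <= 0):
            return np.inf
        return np.sum(mu - counts * np.log(mu))  # Poisson NLL (up to const.)

    for name, fn, x0 in [("LQ", lq_surv, [0.8, 0.3, 0.03]),
                         ("single-hit multi-target", shmt_surv, [0.8, 1.5, 3.0])]:
        fit = minimize(neg_log_lik, x0, args=(fn,), method="Nelder-Mead")
        print(name, "estimates:", np.round(fit.x, 3))
    ```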

  12. The importance of being equivalent: Newton's two models of one-body motion

    NASA Astrophysics Data System (ADS)

    Pourciau, Bruce

    2004-05-01

    As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.

  13. Mathematization Competencies of Pre-Service Elementary Mathematics Teachers in the Mathematical Modelling Process

    ERIC Educational Resources Information Center

    Yilmaz, Suha; Tekin-Dede, Ayse

    2016-01-01

    Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…

  14. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans.

    PubMed

    Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter

    2017-01-01

    Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press.

  16. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    ERIC Educational Resources Information Center

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  17. A complete graphical criterion for the adjustment formula in mediation analysis.

    PubMed

    Shpitser, Ilya; VanderWeele, Tyler J

    2011-03-04

    Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
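
    As a concrete illustration of what the adjustment formula computes, the sketch below evaluates the natural direct effect on synthetic discrete data for which the criterion holds (Python; variable names, effect sizes, and the data-generating process are invented):

    ```python
    # NDE = sum_{c,m} [E(Y|a=1,m,c) - E(Y|a=0,m,c)] P(m|a=0,c) P(c)
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 200_000
    c = rng.integers(0, 2, n)                     # measured confounder
    a = rng.binomial(1, 0.3 + 0.2 * c)            # exposure
    m = rng.binomial(1, 0.2 + 0.4 * a + 0.1 * c)  # mediator
    y = 1.0 * a + 2.0 * m + 0.5 * c + rng.normal(0, 1, n)
    df = pd.DataFrame({"c": c, "a": a, "m": m, "y": y})

    ey = df.groupby(["a", "m", "c"])["y"].mean()   # E[Y | a, m, c]
    pm = df[df.a == 0].groupby("c")["m"].mean()    # P(m=1 | a=0, c)
    pc = df["c"].mean()                            # P(c=1)

    nde = 0.0
    for ci, p_ci in [(0, 1 - pc), (1, pc)]:
        for mi, p_mi in [(0, 1 - pm[ci]), (1, pm[ci])]:
            nde += (ey.loc[(1, mi, ci)] - ey.loc[(0, mi, ci)]) * p_mi * p_ci
    print("natural direct effect ~", round(nde, 3))  # true value here: 1.0
    ```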

  18. Determining informative priors for cognitive models.

    PubMed

    Lee, Michael D; Vanpaemel, Wolf

    2018-02-01

    The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.

  19. Using GIS databases for simulated nightlight imagery

    NASA Astrophysics Data System (ADS)

    Zollweg, Joshua D.; Gartley, Michael; Roskovensky, John; Mercier, Jeffery

    2012-06-01

    Proposed is a new technique for simulating nighttime scenes with realistically-modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl,1 it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source Open Street Map (OSM) Geographic Information System (GIS) database is used. The database comprises many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions. The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development.2 So, the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance of any single node type, among the types tested in this investigation. Other node types, however, still provide useful supplementary data. Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were used to correct for discrepancies between simulated and observed radiance. Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.
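
    A minimal sketch of the node-density assumption (Python; the coordinates, grid size, and linear density-to-radiance scaling are placeholders, not the DIRSIG workflow):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical residential-street node coordinates (lon, lat), one tile.
    nodes = rng.normal(loc=[-106.6, 35.1], scale=[0.05, 0.04], size=(5000, 2))

    # Grid the tile and count nodes per cell; density stands in for development.
    density, xedges, yedges = np.histogram2d(nodes[:, 0], nodes[:, 1], bins=64)

    # Second assumption: radiance rises with local node density. A linear
    # scaling is used here; a calibrated correction function would replace it.
    radiance = 3.0e-9 * density  # illustrative magnitude only
    print(radiance.max(), radiance.mean())
    ```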

  20. Suppression of Metastasis by Primary Tumor and Acceleration of Metastasis Following Primary Tumor Resection: A Natural Law?

    PubMed

    Hanin, Leonid; Rose, Jason

    2018-03-01

    We study metastatic cancer progression through an extremely general individual-patient mathematical model that is rooted in the contemporary understanding of the underlying biomedical processes yet is essentially free of specific biological assumptions of mechanistic nature. The model accounts for primary tumor growth and resection, shedding of metastases off the primary tumor and their selection, dormancy and growth in a given secondary site. However, functional parameters descriptive of these processes are assumed to be essentially arbitrary. In spite of such generality, the model allows for computing the distribution of site-specific sizes of detectable metastases in closed form. Under the assumption of exponential growth of metastases before and after primary tumor resection, we showed that, regardless of other model parameters and for every set of site-specific volumes of detected metastases, the model-based likelihood-maximizing scenario is always the same: complete suppression of metastatic growth before primary tumor resection followed by an abrupt growth acceleration after surgery. This scenario is commonly observed in clinical practice and is supported by a wealth of experimental and clinical studies conducted over the last 110 years. Furthermore, several biological mechanisms have been identified that could bring about suppression of metastasis by the primary tumor and accelerated vascularization and growth of metastases after primary tumor resection. To the best of our knowledge, the methodology for uncovering general biomedical principles developed in this work is new.

  1. Typecasting catchments: Classification, directionality, and the pursuit of universality

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Marshall, Lucy; McGlynn, Brian

    2018-02-01

    Catchment classification poses a significant challenge to hydrology and hydrologic modeling, restricting widespread transfer of knowledge from well-studied sites. The identification of important physical, climatological, or hydrologic attributes (to varying degrees depending on application/data availability) has traditionally been the focus for catchment classification. Classification approaches are regularly assessed with regard to their ability to provide suitable hydrologic predictions - commonly by transferring fitted hydrologic parameters from a data-rich catchment to a data-poor catchment deemed similar by the classification. While such approaches to hydrology's grand challenges are intuitive, they often ignore the most uncertain aspect of the process - the model itself. We explore catchment classification and parameter transferability and the concept of universal donor/acceptor catchments. We identify the implications of the assumption that the transfer of parameters between "similar" catchments is reciprocal (i.e., non-directional). These concepts are considered through three case studies situated across multiple gradients that include model complexity, process description, and site characteristics. Case study results highlight that some catchments are more successfully used as donor catchments and others are better suited as acceptor catchments. These results were observed for both black-box and process-consistent hydrologic models, as well as for differing levels of catchment similarity. Therefore, we suggest that similarity does not adequately satisfy the underlying assumptions being made in parameter regionalization approaches, regardless of model appropriateness. Furthermore, we suggest that the directionality of parameter transfer is an important factor in determining the success of parameter regionalization approaches.

  2. Impacts of Worldview, Implicit Assumptions, Biases, and Groupthink on Israeli Operational Plans in 1973

    DTIC Science & Technology

    2013-05-23

    is called worldview. It determines how individuals interpret everything. In his book, Toward a Theory of Cultural Linguistics, Gary Palmer explains... person to person and organization to organization. Although analytical frameworks provide a common starting... this point, when overwhelmed, that planners reach out to theory and make determinations based on implicit assumptions and unconscious cognitive biases

  3. Description logic-based methods for auditing frame-based medical terminological systems.

    PubMed

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-07-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. This is not only important for the management of TSs but also for providing their users with confidence about the reliability of their contents. Formal methods have the potential to play an important role in the audit of TSs, although there are few empirical studies to assess the benefits of using these methods. In this paper we propose a method based on description logics (DLs) for the audit of TSs. This method is based on the migration of the medical TS from a frame-based representation to a DL-based one. Our method is characterized by a process in which initially stringent assumptions are made about concept definitions. The assumptions allow the detection of concepts and relations that might comprise a source of logical inconsistency. If the assumptions hold then definitions are to be altered to eliminate the inconsistency, otherwise the assumptions are revised. In order to demonstrate the utility of the approach in a real-world case study we audit a TS in the intensive care domain and discuss decisions pertaining to building DL-based representations. This case study demonstrates that certain types of inconsistencies can indeed be detected by applying the method to a medical terminological system. The added value of the method described in this paper is that it provides a means to evaluate the compliance to a number of common modeling principles in a formal manner. The proposed method reveals potential modeling inconsistencies, helping to audit and (if possible) improve the medical TS. In this way, it contributes to providing confidence in the contents of the terminological system.

  4. Sediment radioisotope dating across a stratigraphic discontinuity in a mining-impacted lake.

    PubMed

    McDonald, C P; Urban, N R

    2007-01-01

    Application of radioisotope sediment dating models to lakes subjected to large anthropogenic sediment inputs can be problematic. As a result of copper mining activities, Torch Lake received large volumes of sediment, the characteristics of which were dramatically different from those of the native sediment. Commonly used dating models (CIC-CSR, CRS) were applied to Torch Lake, but assumptions of these methods are violated, rendering sediment geochronologies inaccurate. A modification was made to the CRS model, utilizing a distinct horizon separating mining from post-mining sediment to differentiate between two focusing regimes. (210)Pb inventories in post-mining sediment were adjusted to correspond to those in mining-era sediment, and a sediment geochronology was established and verified using independent markers in (137)Cs accumulation profiles and core X-rays.
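
    For orientation, a sketch of the standard CRS age computation that the authors modify (Python; the profile values are synthetic, and the published approach adds the two-regime inventory adjustment described above):

    ```python
    # CRS model: t(x) = (1/lambda) * ln(A0 / A(x)), where A(x) is the
    # cumulative unsupported 210Pb inventory below depth x, A0 the total.
    import numpy as np

    LAMBDA_PB210 = np.log(2) / 22.3            # 210Pb decay constant, 1/yr

    depth = np.arange(1, 11)                   # cm, section midpoints
    activity = 80 * np.exp(-0.35 * depth)      # unsupported 210Pb, Bq/kg
    mass = np.full_like(depth, 0.5, dtype=float)  # dry mass per section, g/cm^2

    inventory_below = np.cumsum((activity * mass)[::-1])[::-1]  # A(x)
    total = inventory_below[0]                                  # A0
    age = np.log(total / inventory_below) / LAMBDA_PB210        # yr before coring
    print(np.round(age, 1))
    ```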

  5. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
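
    A sketch of the post-processing step in question, accumulating stress along seeded pathlines under two forms common in the blood damage literature (Python; the stress histories and exponents are illustrative placeholders, not the paper's models):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_particles, n_steps, dt = 1000, 200, 1e-3  # seeded pathlines, steps, s
    tau = rng.gamma(2.0, 5.0, (n_particles, n_steps))  # scalar stress, Pa

    # Linear accumulation along each pathline: SA_i = sum_k tau_i(t_k) * dt
    sa_linear = (tau * dt).sum(axis=1)

    # Power-law accumulation (Giersiepen-type form; exponents illustrative)
    alpha, beta = 2.4, 0.8
    sa_power = ((tau ** alpha) * dt ** beta).sum(axis=1)

    # One post-processing choice: average single-passage accumulation
    print("mean linear SA:", sa_linear.mean().round(4))
    print("mean power-law SA:", sa_power.mean().round(2))
    ```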

  6. The Applied Behavior Analysis Research Paradigm and Single-Subject Designs in Adapted Physical Activity Research.

    PubMed

    Haegele, Justin A; Hodge, Samuel Russell

    2015-10-01

    There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.

  7. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPAs) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g., individual-based models). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual-based model, interconnected with population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement; b) aggregations; c) aggregations that respond to environmental forcing (e.g., sea surface temperature); and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases, from ~10% under the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA designs and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish, and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests is an effective option for managing highly mobile pelagic stocks.

  8. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
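
    A bootstrap particle filter is one Monte Carlo scheme matching this description; a sketch with invented, deliberately non-Gaussian and irreversible dynamics (not the authors' exact algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T, n_particles = 100, 5000

    # Truth: monotone crack growth with skewed (gamma) increments
    crack = np.cumsum(0.05 * np.exp(0.02 * np.arange(T)) * rng.gamma(2, 0.5, T))
    obs = np.sqrt(crack) + rng.normal(0, 0.1, T)  # indirect vibration feature

    particles = np.zeros(n_particles)
    estimate = np.empty(T)
    for t in range(T):
        # Propagate: positive increments keep the latent state irreversible
        particles += 0.05 * np.exp(0.02 * t) * rng.gamma(2, 0.5, n_particles)
        # Weight by observation likelihood, then resample (bootstrap filter)
        w = np.exp(-0.5 * ((obs[t] - np.sqrt(particles)) / 0.1) ** 2)
        w /= w.sum()
        particles = rng.choice(particles, size=n_particles, p=w)
        estimate[t] = particles.mean()

    print("final true crack:", crack[-1].round(3),
          "estimate:", estimate[-1].round(3))
    ```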

  9. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.

    PubMed

    Drake, Andrew W; Klakamp, Scott L

    2007-01-10

    A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used models for the analysis of ligand/receptor binding data, the linear Scatchard plot and the nonlinear 2-parameter single binding site model found in commercial programs like Prism®, assume that only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model for fitting cell-based experimental nonlinear titration data.
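
    A sketch of a depletion-aware single-site fit in this spirit (Python; the exact quadratic mass-balance solution below is standard, while the paper's 4-parameter equation adds further terms, and all values here are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bound(L_total, KD, R_total):
        # Exact single-site solution with ligand depletion:
        # B = ((R + L + KD) - sqrt((R + L + KD)^2 - 4*R*L)) / 2
        s = R_total + L_total + KD
        return (s - np.sqrt(s ** 2 - 4 * R_total * L_total)) / 2

    L = np.logspace(-11, -7, 12)            # M, titrated ligand
    true_KD, true_R = 2e-10, 5e-10          # receptor density not << KD
    noise = 1 + np.random.default_rng(5).normal(0, 0.02, L.size)
    y = bound(L, true_KD, true_R) * noise

    popt, pcov = curve_fit(bound, L, y, p0=[1e-9, 1e-9], bounds=(0, np.inf))
    print("fit KD = %.2e M, R_total = %.2e M" % tuple(popt))
    ```

    When receptor concentration is comparable to or above the K(D), the simple hyperbolic fit is biased while this quadratic form still recovers both parameters, which is the regime the abstract warns about.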

  10. The Torsion of Members Having Sections Common in Aircraft Construction

    NASA Technical Reports Server (NTRS)

    Trayer, George W; March, H W

    1930-01-01

    Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine, by experiment and theoretical analysis, how accurate the more common of these formulas are and on what assumptions they are founded, and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of the known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods, with formulas both rigorous and approximate.

  11. Development of Atmospheric Chemistry-Aerosol Transport Model for Bioavailable Iron From Dust and Combustion Source

    NASA Astrophysics Data System (ADS)

    Ito, A.; Feng, Y.

    2009-12-01

    An accurate prediction of the bioavailable iron fraction for ocean biota is hampered by uncertainties in modeling soluble iron fractions in atmospheric aerosols. It has been proposed that atmospheric processing of mineral aerosols by anthropogenic pollutants may be a key pathway to transform insoluble iron into soluble forms. The dissolution of dust minerals strongly depends on solution pH, which is sensitive to the heterogeneous uptake of soluble gases by the dust particle. Due to this complexity, previous model assessments generally assume thermodynamic equilibrium between gas and aerosol phases. Here, we compiled an emission inventory of iron from combustion and dust sources, and incorporated a dust iron dissolution scheme in a global chemistry-aerosol transport model (IMPACT). We will examine and discuss the uncertainties in the estimation of dissolved iron as well as comparisons of the model results with available observations.

  12. Infrared radiation scene generation of stars and planets in celestial background

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Hong, Yaohui; Xu, Xiaojian

    2014-10-01

    An infrared (IR) radiation generation model of stars and planets in the celestial background is proposed in this paper. Cohen's spectral template1 is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed that is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint, and spectral band over 1.2μm ~ 35μm. In the current model, the initial locations of stars are calculated based on the Midcourse Space Experiment (MSX) IR astronomical catalogue (MSX-IRAC),2 while the initial locations of planets are calculated using secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
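
    A sketch of the blackbody assumption for planets over the stated band (Python; the physical constants are standard, while the temperature is an illustrative planetary value):

    ```python
    import numpy as np

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
        a = 2.0 * H * C ** 2 / wavelength_m ** 5
        return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

    wl = np.linspace(1.2e-6, 35e-6, 500)   # sensor band, 1.2-35 um
    radiance = planck_radiance(wl, 130.0)  # e.g., a cold outer planet
    # Wien's law check: peak near 2898 um K / 130 K ~ 22 um
    print("peak at %.1f um" % (wl[radiance.argmax()] * 1e6))
    ```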

  13. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
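
    For reference, a sketch of the standard (Tofts-type) pharmacokinetic model that yields Ktrans and ve (Python; the bi-exponential input function and parameter values are illustrative stand-ins, not the study's measured VIF):

    ```python
    # Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) exp(-(Ktrans/ve)(t-u)) du
    import numpy as np

    t = np.linspace(0, 300, 601)   # s
    dt = t[1] - t[0]
    # Simple bi-exponential stand-in for a vascular input function Cp(t), mM
    cp = 5.0 * (np.exp(-t / 60.0) - np.exp(-t / 8.0))

    def tofts(ktrans_per_s, ve):
        kernel = np.exp(-(ktrans_per_s / ve) * t)
        # discrete convolution approximates the integral
        return ktrans_per_s * np.convolve(cp, kernel)[: t.size] * dt

    ct = tofts(0.25 / 60.0, 0.3)   # Ktrans = 0.25 /min, ve = 0.3
    print("peak tissue conc.: %.3f mM at t = %.0f s" % (ct.max(), t[ct.argmax()]))
    ```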

  14. Variation in and risk factors for paediatric inpatient all-cause mortality in a low income setting: data from an emerging clinical information network.

    PubMed

    Gathara, David; Malla, Lucas; Ayieko, Philip; Karuri, Stella; Nyamai, Rachel; Irimu, Grace; van Hensbroek, Michael Boele; Allen, Elizabeth; English, Mike

    2017-04-05

    Hospital mortality data can inform planning for health interventions and may help optimize resource allocation if they are reliable and appropriately interpreted. However, such data are often not available in low income countries, including Kenya. Data from the Clinical Information Network covering 12 county hospitals' paediatric admissions aged 2-59 months for the period September 2013 to March 2015 were used to describe mortality across differing contexts and to explore whether simple clinical characteristics used to classify severity of illness in common treatment guidelines are consistently associated with inpatient mortality. Regression models accounting for hospital identity and malaria prevalence (low or high) were used. Multiple imputation for missing data was based on a missing at random assumption, with sensitivity analyses based on pattern mixture missing not at random assumptions. The overall cluster-adjusted crude mortality rate across hospitals was 6.2% with an almost 5-fold variation across sites (95% CI 4.9 to 7.8; range 2.1%-11.0%). Hospital identity was significantly associated with mortality. Clinical features included in guidelines for common diseases to assess severity of illness were consistently associated with mortality in multivariable analyses (AROC = 0.86). All-cause mortality is highly variable across hospitals and associated with clinical risk factors identified in disease specific guidelines. A panel of these clinical features may provide a basic common data framework as part of improved health information systems to support evaluations of quality and outcomes of care at scale and inform health system strengthening efforts.

  15. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10^-17 m^2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  16. Missing CD4+ cell response in randomized clinical trials of maraviroc and dolutegravir.

    PubMed

    Cuffe, Robert; Barnett, Carly; Granier, Catherine; Machida, Mitsuaki; Wang, Cunshan; Roger, James

    2015-10-01

    Missing data can compromise inferences from clinical trials, yet the topic has received little attention in the clinical trial community. Shortcomings of the methods commonly used to analyze studies with missing data (complete case analysis, last- or baseline-observation carried forward) have been highlighted in a recent Food and Drug Administration-sponsored report. This report recommends how to mitigate the issues associated with missing data. We present an example of the proposed concepts using data from recent clinical trials. CD4+ cell count data from the previously reported SINGLE and MOTIVATE studies of dolutegravir and maraviroc were analyzed using a variety of statistical methods to explore the impact of missing data. Four methodologies were used: complete case analysis, simple imputation, mixed models for repeated measures, and multiple imputation. We compared the sensitivity of conclusions to the volume of missing data and to the assumptions underpinning each method. Rates of missing data were greater in the MOTIVATE studies (35%-68% premature withdrawal) than in SINGLE (12%-20%). The sensitivity of results to assumptions about missing data was related to the volume of missing data. Estimates of treatment differences by the various analysis methods ranged across a 61 cells/mm3 window in MOTIVATE and a 22 cells/mm3 window in SINGLE. Where missing data are anticipated, analyses require robust statistical and clinical debate of the necessary but unverifiable underlying statistical assumptions. Multiple imputation makes these assumptions transparent, can accommodate a broad range of scenarios, and is a natural analysis for clinical trials in HIV with missing data.
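
    A sketch of the core of multiple imputation with Rubin's-rules pooling (Python; the data, the single regression imputation model, and the omission of a posterior draw for the imputation parameters are simplifications for illustration, not the trials' actual analysis):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, m_imp = 500, 20
    baseline = rng.normal(350, 100, n)          # baseline CD4+ count
    week48 = baseline + rng.normal(150, 60, n)  # complete outcome (then masked)
    missing = rng.random(n) < 0.35              # ~35% dropout, MAR given baseline

    obs = ~missing
    # Imputation model fitted on completers: week48 ~ baseline
    slope, intercept = np.polyfit(baseline[obs], week48[obs], 1)
    sd = np.std(week48[obs] - (slope * baseline[obs] + intercept), ddof=2)

    q, u = [], []
    for _ in range(m_imp):
        y = week48.copy()
        # Draw imputations rather than plugging in predictions
        y[missing] = (slope * baseline[missing] + intercept
                      + rng.normal(0, sd, missing.sum()))
        q.append(y.mean())            # per-imputation estimate
        u.append(y.var(ddof=1) / n)   # its within-imputation variance

    # Rubin's rules: total variance = within + (1 + 1/m) * between
    q_bar, u_bar, b = np.mean(q), np.mean(u), np.var(q, ddof=1)
    total_var = u_bar + (1 + 1 / m_imp) * b
    print("pooled mean: %.1f, se: %.2f" % (q_bar, np.sqrt(total_var)))
    ```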

  17. Defense and the Economy

    DTIC Science & Technology

    1993-01-01

    Assumptions... b. Modeling Productivity... and a macroeconomic model of the U.S. economy, designed to provide long-range projections consistent with trends in production technology, shifts in... investments in roads, bridges, sewer systems, etc. In addition to these modeling assumptions, we also have introduced productivity increases to reflect the

  18. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model and then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error depends on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  19. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, with their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  20. Bayesian model reduction and empirical Bayes for group (DCM) studies

    PubMed Central

    Friston, Karl J.; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E.; van Wijk, Bernadette C.M.; Ziegler, Gabriel; Zeidman, Peter

    2016-01-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. PMID:26569570

  1. Aerosol retrieval experiments in the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.; de Leeuw, G.; Martynenko, D.; Klüser, L.; Bevan, S.; Davies, W.; Ducos, F.; Deuzé, J. L.; Graigner, R. G.; Heckel, A.; von Hoyningen-Hüne, W.; Kolmonen, P.; Litvinov, P.; North, P.; Poulsen, C. A.; Ramon, D.; Siddans, R.; Sogacheva, L.; Tanre, D.; Thomas, G. E.; Vountas, M.; Descloitres, J.; Griesfeller, J.; Kinne, S.; Schulz, M.; Pinnock, S.

    2013-03-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010-2013) algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing pre-cursor algorithms three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components and their mixing ratios. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data qualitatively by visible analysis of monthly mean AOD maps and quantitatively by comparing global daily gridded satellite data against daily average AERONET sun photometer observations for the different versions of each algorithm. The analysis allowed an assessment of sensitivities of all algorithms which helped define the best algorithm version for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR.

  2. Interacting parasites

    USGS Publications Warehouse

    Lafferty, Kevin D.

    2010-01-01

    Parasitism is the most popular life-style on Earth, and many vertebrates host more than one kind of parasite at a time. A common assumption is that parasite species rarely interact, because they often exploit different tissues in a host, and this use of discrete resources limits competition (1). On page 243 of this issue, however, Telfer et al. (2) provide a convincing case of a highly interactive parasite community in voles, and show how infection with one parasite can affect susceptibility to others. If some human parasites are equally interactive, our current, disease-by-disease approach to modeling and treating infectious diseases is inadequate (3).

  3. Autoinflammation and HLA-B27: More than an Antigen

    PubMed Central

    Sibley, Cailin H.

    2016-01-01

    Spondyloarthritides comprise a group of inflammatory conditions which have in common an association with the MHC class I molecule, HLA-B27. Given this association, these diseases are classically considered disorders of adaptive immunity. However, recent data are challenging this assumption and raising the possibility that innate immunity may play a more prominent role in pathogenesis than previously suspected. In this review, the concept of autoinflammation will be discussed and evidence will be presented from human and animal models to support a critical role for innate immunity in HLA-B27 associated disorders. PMID:27229619

  4. Model of bidirectional reflectance distribution function for metallic materials

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun

    2016-09-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
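
    A sketch of a BRDF with this three-component structure (Python; the specific lobe shapes and coefficients are illustrative assumptions, since the paper's exact functional forms are not reproduced here):

    ```python
    # f = k_s * specular + k_dd * directional diffuse + k_d * ideal diffuse
    import numpy as np

    def brdf(theta_i, theta_r, k_s=0.6, k_dd=0.3, k_d=0.1,
             roughness=0.08, lobe_width=0.5):
        spec = np.exp(-((theta_r - theta_i) ** 2) / (2 * roughness ** 2))
        ddiff = np.exp(-(theta_r ** 2) / (2 * lobe_width ** 2))  # broad lobe
        ideal = 1.0 / np.pi                                      # Lambertian
        return k_s * spec + k_dd * ddiff + k_d * ideal

    theta_r = np.radians(np.arange(0, 90, 10))
    print(np.round(brdf(np.radians(30.0), theta_r), 4))
    ```

    Splitting the diffuse term in two is what lets the model keep a directionally varying component for metals instead of a single constant diffuse floor.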

  5. Sci—Fri PM: Topics — 04: What if bystander effects influence cell kill within a target volume? Potential consequences of dose heterogeneity on TCP and EUD on intermediate risk prostate patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, M.J.; Kirkby, C.; Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta

    In vitro evidence has suggested that radiation induced bystander effects may enhance non-local cell killing, which may influence radiotherapy treatment planning paradigms. This work applies a bystander effect model, derived from published in vitro data, to calculate equivalent uniform dose (EUD) and tumour control probability (TCP) and compare them with predictions from standard linear quadratic (LQ) models that assume a response due only to local absorbed dose. Comparisons between the models were made under increasing dose heterogeneity scenarios. Dose throughout the CTV was modeled with normal distributions, where the degree of heterogeneity was dictated by changing the standard deviation (SD). The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. The bystander model suggests a moderate degree of dose heterogeneity yields as good or better an outcome than a uniform dose in terms of EUD and TCP. Intermediate risk prostate prescriptions of 78 Gy over 39 fractions had maximum EUD and TCP values at an SD of around 5 Gy. The plots only dropped below the uniform-dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. The bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV varies. The results suggest the potential for allowing some degree of dose heterogeneity within a CTV, although further investigations of the assumptions of the bystander model are warranted.
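
    For orientation, a minimal local-LQ baseline of the kind the bystander model is compared against can be sketched as follows. The α, β and clonogen numbers are assumed values, and the bystander model itself is not reproduced here; under the purely local model, heterogeneity can only degrade TCP, which is the behaviour the bystander model deviates from.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LQ parameters; 78 Gy in 39 fractions is taken from the abstract.
alpha, beta = 0.15, 0.05            # Gy^-1, Gy^-2 (assumed)
n_frac, D_mean = 39, 78.0
n_clonogens_per_voxel = 1e4         # assumed
n_vox = 10_000

def eud_tcp(sd):
    """EUD and Poisson TCP for normally distributed voxel doses (local LQ only)."""
    D = rng.normal(D_mean, sd, n_vox).clip(min=0.0)
    d = D / n_frac                           # dose per fraction
    log_sf = -(alpha * D + beta * D * d)     # LQ log cell survival
    tcp = np.exp(-n_clonogens_per_voxel * np.exp(log_sf)).prod()
    # EUD: the uniform dose E with the same mean survival,
    # solving (beta/n)E^2 + alpha*E = -ln(mean SF).
    L = -np.log(np.exp(log_sf).mean())
    b = beta / n_frac
    eud = (-alpha + np.sqrt(alpha**2 + 4.0 * b * L)) / (2.0 * b)
    return eud, tcp

for sd in (0.0, 5.0, 10.0):
    e, t = eud_tcp(sd)
    print(f"SD = {sd:4.1f} Gy -> EUD = {e:6.2f} Gy, TCP = {t:.3f}")
```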

  6. Identification of differences in health impact modelling of salt reduction

    PubMed Central

    Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.

    2017-01-01

    We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how each influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and between blood pressure and disease. Modifying the effect sizes in the salt-to-health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models are crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636

  7. Design Considerations for Large Computer Communication Networks,

    DTIC Science & Technology

    1976-04-01

    particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption...channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed...hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered

  8. "Rewind and Replay:" Changing Teachers' Heterosexist Language to Create an Inclusive Classroom Environment

    ERIC Educational Resources Information Center

    Klein, Nicole Aydt; Markowitz, Linda

    2009-01-01

    Objectives: By completing the "Rewind and Replay" activity, participants will: (1) identify heterosexist language in common classroom interactions, (2) discuss underlying heterosexist assumptions embedded in common teacher statements, (3) brainstorm inclusive terms and expressions for use in place of heterosexist language, and (4) verbally…

  9. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  10. 26 CFR 1.752-7 - Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003. 1.752-7 Section 1.752-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of th...

  11. 26 CFR 1.752-7 - Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003. 1.752-7 Section 1.752-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of th...

  12. 26 CFR 1.752-7 - Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003. 1.752-7 Section 1.752-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of th...

  13. 26 CFR 1.752-7 - Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Partnership assumption of partner's § 1.752-7 liability on or after June 24, 2003. 1.752-7 Section 1.752-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of th...

  14. An Exploration of Dental Students' Assumptions About Community-Based Clinical Experiences.

    PubMed

    Major, Nicole; McQuistan, Michelle R

    2016-03-01

    The aim of this study was to ascertain which assumptions dental students recalled feeling prior to beginning community-based clinical experiences and whether those assumptions were fulfilled or challenged. All fourth-year students at the University of Iowa College of Dentistry & Dental Clinics participate in community-based clinical experiences. At the completion of their rotations, they write a guided reflection paper detailing the assumptions they had prior to beginning their rotations and assessing the accuracy of their assumptions. For this qualitative descriptive study, the 218 papers from three classes (2011-13) were analyzed for common themes. The results showed that the students had a variety of assumptions about their rotations. They were apprehensive about working with challenging patients, performing procedures for which they had minimal experience, and working too slowly. In contrast, they looked forward to improving their clinical and patient management skills and knowledge. Other assumptions involved the site (e.g., the equipment/facility would be outdated; protocols/procedures would be similar to the dental school's). Upon reflection, students reported experiences that both fulfilled and challenged their assumptions. Some continued to feel apprehensive about treating certain patient populations, while others found it easier than anticipated. Students were able to treat multiple patients per day, which led to increased speed and patient management skills. However, some reported challenges with time management. Similarly, students were surprised to discover some clinics were new/updated although some had limited instruments and materials. Based on this study's findings about students' recalled assumptions and reflective experiences, educators should consider assessing and addressing their students' assumptions prior to beginning community-based dental education experiences.

  15. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    Summary This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passages stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833
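
    A minimal sketch of the linear stress-accumulation post-processing step, using synthetic pathline data in place of CFD output; the stress magnitudes, particle counts and time step below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pathline data: instantaneous scalar shear stress tau (Pa) sampled along
# each seeded particle's trajectory at fixed time steps dt (s). In a real
# analysis these come from CFD pathline integration; here they are synthetic.
n_particles, n_steps = 200, 500
tau = rng.lognormal(mean=1.0, sigma=0.5, size=(n_particles, n_steps))  # Pa
dt = 1e-3                                                              # s

# Linear stress accumulation per particle over a single passage: SA = sum(tau_i * dt).
sa = (tau * dt).sum(axis=1)
# Time-averaged stress, one alternative post-processing choice named in the abstract.
tau_avg = sa / (n_steps * dt)

print(f"single-passage SA: mean = {sa.mean():.3f} Pa*s, "
      f"95th pct = {np.percentile(sa, 95):.3f} Pa*s")
print(f"time-averaged tau: mean = {tau_avg.mean():.2f} Pa")
```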

  16. Diagnosing Diagnostic Models: From Von Neumann's Elephant to Model Equivalencies and Network Psychometrics

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2018-01-01

    This article critically reviews how diagnostic models have been conceptualized and how they compare to other approaches used in educational measurement. In particular, certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models are reviewed and it is questioned whether these assumptions are the…

  17. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  18. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  19. Effects of Model Formulation on Estimates of Health in Individual Right Whales (Eubalaena glacialis).

    PubMed

    Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S

    2016-01-01

    Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.

  20. A Bayesian Multilevel Model for Microcystin Prediction in ...

    EPA Pesticide Factsheets

    The frequency of cyanobacteria blooms in North American lakes is increasing. A major concern with rising cyanobacteria blooms is microcystin, a common cyanobacterial hepatotoxin. To explore the conditions that promote high microcystin concentrations, we analyzed the US EPA National Lake Assessment (NLA) dataset collected in the summer of 2007. The NLA dataset is reported for nine eco-regions. We used the results of random forest modeling as a means of variable selection, from which we developed a Bayesian multilevel model of microcystin concentrations. Model parameters under a multilevel modeling framework are eco-region specific, but they are also assumed to be exchangeable across eco-regions for broad continental scaling. The exchangeability assumption ensures that both the common patterns and eco-region specific features will be reflected in the model. Furthermore, the method incorporates appropriate estimates of uncertainty. Our preliminary results show associations between microcystin and turbidity, total nutrients, and N:P ratios. Upon release of a comparable 2012 NLA dataset, we will apply Bayesian updating. The results will help develop management strategies to alleviate microcystin impacts and improve lake quality. This work provides a probabilistic framework for predicting microcystin presence in lakes. It would allow for insights to be made about how changes in nutrient concentrations could potentially change toxin levels.
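
    The random-forest variable-selection step can be illustrated with scikit-learn on simulated data; the predictor names echo the abstract, but the data below are synthetic stand-ins, not the NLA dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic lake predictors and a simulated log-microcystin response.
n = 500
X = rng.normal(size=(n, 4))
cols = ["turbidity", "total_N", "total_P", "NP_ratio"]
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.5, n)

# Fit a random forest and rank predictors by impurity-based importance,
# the kind of screening that can precede a Bayesian multilevel model.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(cols, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:10s} importance = {imp:.3f}")
```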

  1. Model specification in oral health-related quality of life research.

    PubMed

    Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan

    2009-10-01

    The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, implying a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, rather than determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.

  2. Neuronal models in infinite-dimensional spaces and their finite-dimensional projections: Part II.

    PubMed

    Brzychczy, S; Leszczyński, H; Poznanski, R R

    2012-09-01

    Application of a comparison theorem is used to examine the validity of the "lumped parameter assumption" in describing the behavior of solutions of the continuous cable equation U_t = D·U_xx + f(U) compared with the discrete cable equation dV_n/dt = d*(V_{n+1} − 2V_n + V_{n−1}) + f(V_n), where f is a nonlinear functional describing the internal diffusion of electrical potential in single neurons. While the discrete cable equation looks like a finite-difference approximation of the continuous cable equation, solutions of the two reveal significantly different behavior, which implies that compartmental models (spiking neurons) are poor quantifiers of neurons, contrary to what is commonly accepted in computational neuroscience.
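
    A minimal integration of the discrete cable equation shows the compartmental side of the comparison; the cubic nonlinearity f and all parameter values below are assumed for illustration and are not those of the paper.

```python
import numpy as np

# Explicit-Euler integration of the discrete cable equation
#   dV_n/dt = d*(V_{n+1} - 2 V_n + V_{n-1}) + f(V_n)
# with a bistable cubic nonlinearity (illustrative choice).
f = lambda v: v * (1.0 - v) * (v - 0.3)       # assumed nonlinearity
d_star, dt, n_comp, n_steps = 0.5, 0.01, 100, 5000

V = np.zeros(n_comp)
V[:5] = 1.0                                    # depolarize one end

for _ in range(n_steps):
    lap = np.zeros_like(V)
    lap[1:-1] = V[2:] - 2.0 * V[1:-1] + V[:-2]
    lap[0] = V[1] - V[0]                       # sealed (no-flux) ends
    lap[-1] = V[-2] - V[-1]
    V = V + dt * (d_star * lap + f(V))

# A depolarization front invades the resting compartments.
print(f"wavefront near compartment {np.argmin(np.abs(V - 0.5))}")
```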

  3. A Learning-Based Approach to Reactive Security

    NASA Astrophysics Data System (ADS)

    Barth, Adam; Rubinstein, Benjamin I. P.; Sundararajan, Mukund; Mitchell, John C.; Song, Dawn; Bartlett, Peter L.

    Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender's strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker's incentives and knowledge.

  4. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  5. Roy's specific life values and the philosophical assumption of humanism.

    PubMed

    Hanna, Debra R

    2013-01-01

    Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.

  6. The Average Hazard Ratio - A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?

    PubMed

    Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard

    2018-05-01

    In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups, or a composite time-to-first-event endpoint and several components, are considered, the proportional hazard assumption usually does not hold true simultaneously for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte-Carlo simulations and on a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.
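
    The following toy calculation illustrates the underlying idea: under non-proportional (here, crossing Weibull) hazards the hazard ratio is time-varying, and an average hazard ratio is a weighted time-average of it. The weight function below is one simple assumed choice, not the exact Kalbfleisch–Prentice construction or either of the two estimators compared in the paper.

```python
import numpy as np

# Weibull hazard h(t) = (k/lam)*(t/lam)^(k-1) and survival S(t) = exp(-(t/lam)^k).
hazard = lambda t, k, lam: (k / lam) * (t / lam) ** (k - 1.0)
survival = lambda t, k, lam: np.exp(-((t / lam) ** k))

t = np.linspace(0.01, 5.0, 2000)
k1, k2, lam = 0.8, 1.5, 2.0                  # shapes chosen so the hazards cross
h1, h2 = hazard(t, k1, lam), hazard(t, k2, lam)
hr = h1 / h2                                  # time-varying hazard ratio

# Assumed weight: pooled event density S1*S2*(h1+h2), so times when many
# subjects are still at risk contribute more to the average.
w = survival(t, k1, lam) * survival(t, k2, lam) * (h1 + h2)
avg_hr = np.trapz(w * hr, t) / np.trapz(w, t)

print(f"HR at t=0.1: {np.interp(0.1, t, hr):.2f}; HR at t=4.0: {np.interp(4.0, t, hr):.2f}")
print(f"weighted average HR: {avg_hr:.2f}")
```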

  7. Statistical strategies to quantify respiratory sinus arrhythmia: Are commonly used metrics equivalent?

    PubMed Central

    Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.

    2011-01-01

    Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367

  8. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, it is possible that the assumption of stationarity, which is a prerequisite for traditional frequency analysis, no longer holds, and hence the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modelled in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov Chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities.
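
    A minimal sketch of B-spline quantile regression, assuming statsmodels' QuantReg and a patsy spline basis (the paper uses a Bayesian MCMC fit instead); the data, covariate, spline degrees of freedom and the 0.95 quantile are all illustrative stand-ins for the annual-maximum analysis described above.

```python
import numpy as np
from patsy import dmatrix, build_design_matrices
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(7)

# Synthetic stand-in: "annual maximum discharge" vs. a standardized climate index.
n = 300
x = rng.uniform(-2.0, 2.0, n)
y = 50.0 + 10.0 * np.sin(x) + (5.0 + 2.0 * x**2) * rng.gumbel(0.0, 1.0, n)

# Cubic B-spline design matrix in the covariate; df=5 is an arbitrary choice
# (the paper selects the degree and number of knots by BIC).
X = dmatrix("bs(x, df=5, degree=3)", {"x": x}, return_type="dataframe")

# Non-stationary 0.95 quantile as a smooth function of the covariate.
res = QuantReg(y, X).fit(q=0.95)
x_new = np.linspace(-2.0, 2.0, 5)
X_new = build_design_matrices([X.design_info], {"x": x_new})[0]
print(np.asarray(res.predict(X_new)).round(1))
```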

  9. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
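
    The computational motivation for the low-rank-plus-diagonal assumption can be seen directly: by the Woodbury identity, inverting Omega = D + U Uᵀ only requires an r × r solve. The sketch below demonstrates that identity, not the COP algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse covariance assumed low-rank plus diagonal: Omega = D + U U^T.
# Woodbury: (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
p, r = 2000, 5
d = rng.uniform(0.5, 2.0, p)            # diagonal of D
U = rng.normal(size=(p, r)) / np.sqrt(p)

Dinv_U = U / d[:, None]                              # D^{-1} U
core = np.linalg.inv(np.eye(r) + U.T @ Dinv_U)       # only an r x r inversion
Sigma = np.diag(1.0 / d) - Dinv_U @ core @ Dinv_U.T  # Omega^{-1}

# Verify: Omega @ Sigma should be the identity.
err = np.abs((np.diag(d) + U @ U.T) @ Sigma - np.eye(p)).max()
print(f"max |Omega @ Sigma - I| = {err:.2e}")
```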

  10. Memory and Common Ground Processes in Language Use

    PubMed Central

    Brown-Schmidt, Sarah; Duff, Melissa C.

    2018-01-01

    During communication, we form assumptions about what our communication partners know and believe. Information that is mutually known between the discourse partners—their common ground—serves as a backdrop for successful communication. Here we present an introduction to the focus of this topic, which is the role of memory in common ground and language use. Two types of questions emerge as central to understanding the relationship between memory and common ground, specifically questions having to do with the representation of common ground in memory, and the use of common ground during language processing. PMID:27797165

  11. Particle precipitation: How the spectrum fit impacts atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wissing, J. M.; Nieder, H.; Yakovchouk, O. S.; Sinnhuber, M.

    2016-11-01

    Particle precipitation causes atmospheric ionization. Modeled ionization rates are widely used in atmospheric chemistry/climate simulations of the upper atmosphere. As ionization rates are based on particle measurements, some assumptions concerning the energy spectrum are required. While detectors measure particles binned into certain energy ranges only, the calculation of an ionization profile needs a fit for the whole energy spectrum. Therefore the following assumptions are needed: (a) a fit function (e.g. power-law or Maxwellian), (b) the energy range, (c) the number of segments in the spectral fit, (d) fixed or variable positions of the intersections between these segments. The aim of this paper is to quantify the impact of different assumptions on ionization rates as well as their consequences for atmospheric chemistry modeling. As the assumptions about the particle spectrum are independent of the ionization model itself, the results of this paper are not restricted to a single ionization model, even though the Atmospheric Ionization Module OSnabrück (AIMOS, Wissing and Kallenrode, 2009) is used here. We include protons only, as this allows us to trace changes in the chemistry model directly back to the different assumptions without the need to interpret superposed ionization profiles. However, since every particle species requires a particle spectrum fit with the mentioned assumptions, the results are generally applicable to all precipitating particles. The reader may argue that the selection of assumptions for the particle fit is of minor interest, but we would like to emphasize this topic as it is a major, if not the main, source of discrepancies between different ionization models (and reality). Depending on the assumptions, single ionization profiles may vary by a factor of 5, long-term calculations may show systematic over- or underestimation at specific altitudes, and even for ideal setups the definition of the energy range involves an intrinsic 25% uncertainty in the ionization rates. The effects on atmospheric chemistry (HOx, NOx and ozone) have been calculated with 3dCTM, showing that the spectrum fit is responsible for an 8% variation in ozone between setups, and even up to 50% for extreme setups.
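
    Assumption (a) can be illustrated with a toy single-segment power-law fit to binned fluxes; the bin edges and flux values below are invented, and real work would fit multiple segments and alternative functional forms such as a Maxwellian.

```python
import numpy as np
from scipy.optimize import curve_fit

# Detector-style channels: synthetic differential proton fluxes at the
# geometric-mean energies of a few broad bins (illustrative values only).
edges = np.array([0.5, 4.0, 9.0, 40.0, 80.0, 165.0])   # MeV
E = np.sqrt(edges[:-1] * edges[1:])                     # representative energies
true_j = 1e5 * E ** -2.2
j_meas = true_j * np.exp(np.random.default_rng(5).normal(0, 0.1, E.size))

# Single-segment power-law fit j(E) = j0 * E^-gamma, done in log space.
powlaw = lambda logE, log_j0, gamma: log_j0 - gamma * logE
(log_j0, gam), _ = curve_fit(powlaw, np.log(E), np.log(j_meas))
print(f"fitted spectral index gamma = {gam:.2f}")

# The extrapolation choice matters: low-altitude ionization comes from
# energies above the last channel, where the fit is unconstrained by data.
E_hi = 500.0  # MeV, beyond the measured range
print(f"extrapolated flux at {E_hi} MeV: {np.exp(log_j0) * E_hi**-gam:.3g}")
```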

  12. Comparison of Vehicle Choice Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, Thomas S.; Levinson, Rebecca S.; Brooker, Aaron

    Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared — LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT — have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of the models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but they were the same or very similar where possible.

  13. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    NASA Astrophysics Data System (ADS)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  14. Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters

    PubMed Central

    Wozniak, Christopher E.; Hughes, Kelly T.

    2008-01-01

    Summary Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950

  15. Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling

    NASA Astrophysics Data System (ADS)

    Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.

    2018-05-01

    Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique by optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing some modeling assumptions, such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, hence affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. The EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated. These included changes in PT thickness, pars-flaccida Young's modulus, and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness and the least influential parameter was pars-flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
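
    A minimal sketch of the golden-section optimization step; the cost function below is a made-up smooth stand-in for the FE-model-to-measurement mismatch, since running the actual FE model is outside the scope of a snippet.

```python
import numpy as np

def golden_section(cost, lo, hi, tol=1e-4):
    """Minimize a unimodal 1-D cost function on [lo, hi] by golden-section search."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol * (abs(lo) + abs(hi)):
        if cost(c) < cost(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Hypothetical mismatch between measured and model-predicted eardrum shape
# as a function of the PT Young's modulus E (MPa); minimum placed at 32 MPa.
cost = lambda E: np.log(E / 32.0) ** 2 + 0.1 * np.log(E / 32.0) ** 4
E_pt = golden_section(cost, 1.0, 200.0)
print(f"estimated E_PT ~ {E_pt:.1f} MPa")
```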

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauböck, Michi; Psaltis, Dimitrios; Özel, Feryal, E-mail: mbaubock@email.arizona.edu

    We calculate the effects of spot size on pulse profiles of moderately rotating neutron stars. Specifically, we quantify the bias introduced in radius measurements from the common assumption that spots are infinitesimally small. We find that this assumption is reasonable for spots smaller than 10°–18° and leads to errors that are ≤10% in the radius measurement, depending on the location of the spot and the inclination of the observer. We consider the implications of our results for neutron star radius measurements with the upcoming and planned X-ray missions NICER and LOFT. We calculate the expected spot size for different classes of sources and investigate the circumstances under which the assumption of a small spot is justified.

  17. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of the dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g. sliding-window), which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content, and 2) a sparse signal representation, which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-fMRI datasets highlight the potential of these approaches.
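
    The locality assumption is easy to demonstrate with a sliding-window toy example: a longer window blurs an abrupt change in coupling, while a shorter one is noisier. Everything below is synthetic data, not fMRI.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two synthetic "regions" whose coupling flips sign abruptly at t = 300.
T = 600
x = rng.normal(size=T)
y = np.where(np.arange(T) < 300, 0.9 * x, -0.9 * x) + 0.4 * rng.normal(size=T)

def sliding_corr(x, y, win):
    """Sliding-window dynamic connectivity: Pearson r in each window."""
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(T - win)])

for win in (30, 120):
    r = sliding_corr(x, y, win)
    # First sign change of the windowed correlation ~ detected switch point.
    cross = np.argmax(np.sign(r[:-1]) != np.sign(r[1:]))
    print(f"window {win:3d}: first sign change near t = {cross + win // 2}")
```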

  18. Understanding the scale of the single ion free energy: A critical test of the tetra-phenyl arsonium and tetra-phenyl borate assumption

    NASA Astrophysics Data System (ADS)

    Duignan, Timothy T.; Baer, Marcel D.; Mundy, Christopher J.

    2018-06-01

    The tetra-phenyl arsonium and tetra-phenyl borate (TATB) assumption is a commonly used extra-thermodynamic assumption that allows single ion free energies to be split into cationic and anionic contributions. The assumption is that the values for the TATB salt can be divided equally. This is justified by arguing that these large hydrophobic ions will cause a symmetric response in water. Experimental and classical simulation work has raised potential flaws with this assumption, indicating that hydrogen bonding with the phenyl ring may favor the solvation of the TB- anion. Here, we perform ab initio molecular dynamics simulations of these ions in bulk water demonstrating that there are significant structural differences. We quantify our findings by reproducing the experimentally observed vibrational shift for the TB- anion and confirm that this is associated with hydrogen bonding with the phenyl rings. Finally, we demonstrate that this results in a substantial energetic preference of the water to solvate the anion. Our results suggest that the validity of the TATB assumption, which is still widely used today, should be reconsidered experimentally in order to properly reference single ion solvation free energy, enthalpy, and entropy.

  19. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    USDA-ARS?s Scientific Manuscript database

    Although the two-dimensional (2D) shallow water model is formulated on the basis of several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, it has been used as a simple alternative to the complex 3D model to compute water flows in which these assumptions may be ...

  20. Accommodating Missing Data in Mixture Models for Classification by Opinion-Changing Behavior.

    ERIC Educational Resources Information Center

    Hill, Jennifer L.

    2001-01-01

    Explored the assumptions implicit in models reflecting three different approaches to missing survey response data using opinion data collected from Swiss citizens at four time points over nearly 2 years. Results suggest that the latently ignorable model has the least restrictive structural assumptions. Discusses the idea of "durable…

  1. Responses to atmospheric CO2 concentrations in crop simulation models: a review of current simple and semicomplex representations and options for model development.

    PubMed

    Vanuytrecht, Eline; Thorburn, Peter J

    2017-05-01

    Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to and often poor documentation of these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding compared to arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.

  2. In the Opponent's Shoes: Increasing the Behavioral Validity of Attackers' Judgments in Counterterrorism Models.

    PubMed

    Sri Bhashyam, Sumitra; Montibeller, Gilberto

    2016-04-01

    A key objective for policymakers and analysts dealing with terrorist threats is trying to predict the actions that malicious agents may take. A recent trend in counterterrorism risk analysis is to model the terrorists' judgments, as these will guide their choices of such actions. The standard assumptions in most of these models are that terrorists are fully rational, following all the normative desiderata required for rational choices, such as having a set of constant and ordered preferences, being able to perform a cost-benefit analysis of their alternatives, among many others. However, are such assumptions reasonable from a behavioral perspective? In this article, we analyze the types of assumptions made across various counterterrorism analytical models that represent malicious agents' judgments and discuss their suitability from a descriptive point of view. We then suggest how some of these assumptions could be modified to describe terrorists' preferences more accurately, by drawing knowledge from the fields of behavioral decision research, politics, philosophy of choice, public choice, and conflict management in terrorism. Such insight, we hope, might help make the assumptions of these models more behaviorally valid for counterterrorism risk analysis.

  3. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  4. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  5. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    PubMed

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.

  6. Warped linear mixed models for the genetic analysis of transformed phenotypes

    PubMed Central

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D.; Stegle, Oliver

    2014-01-01

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype–phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction. PMID:25234577
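
    As a simple analogue of estimating a transformation from the observed data (the paper's warped LMM learns it jointly with the mixed model, which is not reproduced here), maximum-likelihood Box-Cox selection can be sketched with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Skewed synthetic "phenotype": exponentiated Gaussian, so a log-like
# transformation should be recovered (Box-Cox lambda near 0).
pheno = np.exp(rng.normal(loc=1.0, scale=0.6, size=1000))

transformed, lam = stats.boxcox(pheno)   # ML estimate of lambda
print(f"estimated Box-Cox lambda = {lam:.3f} (log transform corresponds to 0)")
print(f"skewness before = {stats.skew(pheno):.2f}, "
      f"after = {stats.skew(transformed):.2f}")
```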

  7. Warped linear mixed models for the genetic analysis of transformed phenotypes.

    PubMed

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver

    2014-09-19

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.

  8. Collaborative Modeling: Experience of the U.S. Preventive Services Task Force.

    PubMed

    Petitti, Diana B; Lin, Jennifer S; Owens, Douglas K; Croswell, Jennifer M; Feuer, Eric J

    2018-01-01

    Models can be valuable tools to address uncertainty, trade-offs, and preferences when trying to understand the effects of interventions. Availability of results from two or more independently developed models that examine the same question (comparative modeling) allows systematic exploration of differences between models and the effect of these differences on model findings. Guideline groups sometimes commission comparative modeling to support their recommendation process. In this commissioned collaborative modeling, modelers work with the people who are developing a recommendation or policy not only to define the questions to be addressed but ideally, work side-by-side with each other and with systematic reviewers to standardize selected inputs and incorporate selected common assumptions. This paper describes the use of commissioned collaborative modeling by the U.S. Preventive Services Task Force (USPSTF), highlighting the general challenges and opportunities encountered and specific challenges for some topics. It delineates other approaches to use modeling to support evidence-based recommendations and the many strengths of collaborative modeling compared with other approaches. Unlike systematic reviews prepared for the USPSTF, the commissioned collaborative modeling reports used by the USPSTF in making recommendations about screening have not been required to follow a common format, sometimes making it challenging to understand key model features. This paper presents a checklist developed to critically appraise commissioned collaborative modeling reports about cancer screening topics prepared for the USPSTF. Copyright © 2017 American Journal of Preventive Medicine. All rights reserved.

  9. The four-principle formulation of common morality is at the core of bioethics mediation method.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2015-08-01

    Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most cases of clinical ethics in which conflict is the main issue, which implies that there is either no real ethical issue or if there were, they are not the key to finding a resolution. I question the tacit assumption of non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background even in what appears to be a purely practical method of mediation cannot be overemphasized. Acknowledgement of the normative nature of bioethics mediation method necessitates a more critical attitude of the bioethics mediators towards the norms they usually take for granted uncritically as valid.

  10. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
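
    The structure of such an estimator can be sketched in a few lines: remove K principal-component factors, then threshold the residual covariance. For brevity the sketch uses a single universal threshold rather than the entry-adaptive thresholds of Cai and Liu, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic returns with 3 common factors plus weakly correlated noise.
n, p, K = 400, 100, 3
B = rng.normal(size=(p, K))                 # factor loadings
F = rng.normal(size=(n, K))                 # factor realizations
X = F @ B.T + rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)

# Common-factor part: the K leading principal components of the sample covariance.
w, V = np.linalg.eigh(S)
idx = np.argsort(w)[::-1][:K]
common = (V[:, idx] * w[idx]) @ V[:, idx].T

# Threshold the residual ("idiosyncratic") covariance instead of forcing it
# to be diagonal, as a strict factor model would.
R = S - common
tau = 2.0 * np.sqrt(np.log(p) / n)          # simple universal threshold (assumed)
R_thr = np.where(np.abs(R) >= tau, R, 0.0)
np.fill_diagonal(R_thr, np.diag(R))         # always keep the variances

Sigma_hat = common + R_thr
print(f"residual entries kept: {(np.abs(R) >= tau).mean():.1%}")
print(f"min eigenvalue of estimate: {np.linalg.eigvalsh(Sigma_hat).min():.3f}")
```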

  11. Deflection Shape Reconstructions of a Rotating Five-blade Helicopter Rotor from TLDV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fioretti, A.; Castellini, P.; Tomasini, E. P.

    2010-05-28

    Helicopters are aircraft which are subjected to high levels of vibration, mainly due to their spinning rotors. Rotors are made of two or more blades attached by hinges to a central hub, which can make the dynamic behaviour difficult to study. However, they share some common dynamic properties with those expected in bladed discs, so the analytical modelling of rotors can be performed using some of the assumptions adopted for bladed discs. This paper presents results of a vibration study performed on a scaled helicopter rotor model which was rotating at a fixed rotational speed and excited by an air jet. A simplified analytical model of the rotor was also produced to help the identification of the vibration patterns measured using a single-point tracking-SLDV measurement method.

  12. Modelling Framework and Assistive Device for Peripheral Intravenous Injections

    NASA Astrophysics Data System (ADS)

    Kam, Kin F.; Robinson, Martin P.; Gilbert, Mathew A.; Pelah, Adar

    2016-02-01

    Intravenous access for blood sampling or drug administration that requires peripheral venepuncture is perhaps the most common invasive procedure practiced in hospitals, clinics and general practice surgeries. We describe an idealised mathematical framework for modelling the dynamics of the peripheral venepuncture process. Basic assumptions of the model are confirmed through motion analysis of needle trajectories during venepuncture, taken from video recordings of a skilled practitioner injecting into a practice kit. The framework is also applied to the design and construction of a proposed device for accurate needle guidance during venepuncture administration, which was assessed as consistent and repeatable in application and did not lead to over-puncture. The study provides insights into the ubiquitous peripheral venepuncture process and may contribute to applications in training and in the design of new devices, including for use in robotic automation.

  13. Is herpes zoster vaccination likely to be cost-effective in Canada?

    PubMed

    Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L

    2014-05-30

    To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. We then transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized, and we tabulated the results for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Assumptions about costs, discount rates, vaccine efficacy and waning, and epidemiology drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at a willingness-to-pay threshold of $100,000 per QALY. Few studies found cost-effectiveness ratios above this threshold, and only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.
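
    To make the modelling machinery concrete, the sketch below runs a deliberately tiny two-arm Markov cohort model with discounting and reports an incremental cost-effectiveness ratio. It is a toy under invented assumptions: the states, transition probabilities, costs, utilities, vaccine price and discount rate are placeholders, not values from the reviewed studies.

```python
import numpy as np

# States: 0 healthy, 1 acute HZ, 2 post-herpetic neuralgia (PHN), 3 dead.
def discounted_totals(p_hz, vaccine_cost=0.0, years=30, rate=0.05):
    P = np.array([
        [1 - p_hz - 0.01, p_hz, 0.00, 0.01],   # healthy -> HZ or death
        [0.70,            0.00, 0.25, 0.05],   # HZ resolves or becomes PHN
        [0.40,            0.00, 0.55, 0.05],   # PHN persists or resolves
        [0.00,            0.00, 0.00, 1.00],   # dead is absorbing
    ])
    cost = np.array([0.0, 500.0, 2000.0, 0.0])   # annual cost per state (placeholder)
    qaly = np.array([1.0, 0.70, 0.50, 0.0])      # annual utility per state (placeholder)
    x = np.array([1.0, 0.0, 0.0, 0.0])           # cohort starts healthy
    total_cost, total_qaly = vaccine_cost, 0.0
    for t in range(years):
        d = (1 + rate) ** -t                     # discount factor for year t
        total_cost += d * (x @ cost)
        total_qaly += d * (x @ qaly)
        x = x @ P                                # advance the cohort one cycle
    return total_cost, total_qaly

c0, q0 = discounted_totals(p_hz=0.010)                    # no vaccination
c1, q1 = discounted_totals(p_hz=0.004, vaccine_cost=150)  # vaccinated arm
print("ICER: $%.0f per QALY gained" % ((c1 - c0) / (q1 - q0)))
```

    Varying the vaccine price, the discount rate and the waning of efficacy in such a model reproduces exactly the sensitivities the review reports.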

  14. The Emotions as a Culture-Common Framework of Motivational Experiences and Communicative Cues.

    ERIC Educational Resources Information Center

    Izard, Carroll E.

    Several important conclusions follow from the assumptions that the fundamental emotions are (a) innate, universal phenomena, and (b) the components of man's principal motivation system. All people have in the fundamental emotions the capacity for a common set of subjective experiences and expressions. These have a special communication value. The…

  15. Retrieval of Polar Stratospheric Cloud Microphysical Properties from Lidar Measurements: Dependence on Particle Shape Assumptions

    NASA Technical Reports Server (NTRS)

    Reichardt, J.; Reichardt, S.; Yang, P.; McGee, T. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    A retrieval algorithm has been developed for the microphysical analysis of polar stratospheric cloud (PSC) optical data obtained using lidar instrumentation. The parameterization scheme of the PSC microphysical properties allows for coexistence of up to three different particle types with size-dependent shapes. The finite difference time domain (FDTD) method has been used to calculate optical properties of particles with maximum dimensions equal to or less than 2 μm and with shapes that can be considered more representative of PSCs on the scale of individual crystals than the commonly assumed spheroids; specifically, these are irregular and hexagonal crystals. Selection of the optical parameters that are input to the inversion algorithm is based on a potential data set such as that gathered by two of the lidars on board the NASA DC-8 during the Stratospheric Aerosol and Gas Experiment (SAGE) III Ozone Loss and Validation Experiment (SOLVE) campaign in winter 1999/2000: the Airborne Raman Ozone and Temperature Lidar (AROTEL) and the NASA Langley Differential Absorption Lidar (DIAL). The microphysical retrieval algorithm has been applied to study how particle shape assumptions affect the inversion of lidar data measured in lee-wave PSCs. The model simulations show that under the assumption of spheroidal particle shapes, PSC surface and volume density are systematically smaller than the FDTD-based values by approximately 10-30% and 5-23%, respectively.

  16. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set.
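
    The moment-based screen below illustrates the distinction the authors draw, though it is far cruder than their model: it simulates log response times whose person-level speed factor is skewed while the residuals stay normal, then tests skewness in each part separately. All distributions and sample sizes are invented for the demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate log response times: item effects minus a SKEWED person speed factor
# plus normal, homoscedastic noise.
n_persons, n_items = 500, 20
speed = stats.skewnorm.rvs(a=4, size=n_persons, random_state=rng)
item = rng.normal(0.0, 0.3, size=n_items)
log_rt = item[None, :] - speed[:, None] + rng.normal(0.0, 0.4, (n_persons, n_items))

# Person means absorb the speed factor: skewness here points to a skewed factor.
print("speed factor:", stats.skewtest(log_rt.mean(axis=1)))

# Two-way centered residuals remove person and item effects: skewness here
# would instead point to non-normal (e.g., heteroscedastic) residuals.
resid = (log_rt - log_rt.mean(axis=1, keepdims=True)
         - log_rt.mean(axis=0) + log_rt.mean())
print("residuals   :", stats.skewtest(resid.ravel()))
```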

  17. Evaluation of spatio-temporal Bayesian models for the spread of infectious diseases in oil palm.

    PubMed

    Denis, Marie; Cochard, Benoît; Syahputra, Indra; de Franqueville, Hubert; Tisné, Sébastien

    2018-02-01

    In the field of epidemiology, studies are often focused on mapping diseases in relation to time and space. Hierarchical modeling is a common flexible and effective tool for modeling problems related to disease spread. In the context of oil palm plantations infected by the fungal pathogen Ganoderma boninense, we propose and compare two spatio-temporal hierarchical Bayesian models addressing the lack of information on propagation modes and transmission vectors. We investigate two alternative process models to study the unobserved mechanism driving the infection process. The models help gain insight into the spatio-temporal dynamic of the infection by identifying a genetic component in the disease spread and by highlighting a spatial component acting at the end of the experiment. In this challenging context, we propose models that provide assumptions on the unobserved mechanism driving the infection process while making short-term predictions using ready-to-use software.
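
    A forward simulation can make the kind of latent process such models posit more tangible. The toy below spreads infection on a planting grid where the log-odds of infection combine a per-palm genetic susceptibility with pressure from infected neighbours; the grid size, link function and every coefficient are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # palms on an n x n planting grid
genetic = rng.normal(0, 1, size=(n, n))   # per-palm susceptibility (e.g., progeny)
infected = np.zeros((n, n), dtype=bool)
infected[n // 2, n // 2] = True           # index case at the centre

def neighbour_pressure(inf):
    """Count infected rook neighbours for every grid cell."""
    p = np.zeros(inf.shape)
    p[1:, :] += inf[:-1, :]; p[:-1, :] += inf[1:, :]
    p[:, 1:] += inf[:, :-1]; p[:, :-1] += inf[:, 1:]
    return p

for t in range(15):                       # discrete census rounds
    eta = -4.0 + 0.8 * genetic + 1.2 * neighbour_pressure(infected)
    p_inf = 1.0 / (1.0 + np.exp(-eta))    # logistic link on the latent risk
    new = (rng.random((n, n)) < p_inf) & ~infected
    infected |= new

print("infected palms after 15 rounds:", int(infected.sum()))
```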

  18. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    PubMed Central

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
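
    The mimicry claim is easy to reproduce numerically. The sketch below compares an additive weighted-utility rule with a dimensional prioritization heuristic (inspect attributes in weight order, decide on the first one that discriminates) on random two-option problems; the weights, tolerance and fallback rule are assumptions of this toy, not the paper's formal transformation.

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.array([0.7, 0.3])                      # assumed attribute weights

def utility_choice(a, b):
    """Additive utility stage: pick the option with the higher weighted sum."""
    return 0 if w @ a > w @ b else 1

def heuristic_choice(a, b, tol=0.1):
    """Dimensional prioritization: scan attributes in weight order and decide
    on the first one whose difference exceeds a tolerance; no utility is
    computed unless nothing discriminates."""
    for d in np.argsort(w)[::-1]:
        if abs(a[d] - b[d]) > tol:
            return 0 if a[d] > b[d] else 1
    return 0 if w @ a > w @ b else 1          # rare tie-breaking fallback

trials = [(rng.random(2), rng.random(2)) for _ in range(10_000)]
agree = np.mean([utility_choice(a, b) == heuristic_choice(a, b)
                 for a, b in trials])
print(f"agreement between the two rules: {agree:.1%}")
```

    With these settings the two rules typically agree on the large majority of trials, illustrating why choice data alone struggle to separate them.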

  19. GEM-CEDAR Study of Ionospheric Energy Input and Joule Dissipation

    NASA Technical Reports Server (NTRS)

    Rastaetter, Lutz; Kuznetsova, Maria M.; Shim, Jasoon

    2012-01-01

    We are studying ionospheric model performance for six events selected for the GEM-CEDAR modeling challenge. DMSP measurements of electric and magnetic fields are converted into Poynting flux values that estimate the energy input into the ionosphere. Models generate rates of ionospheric Joule dissipation that are compared to the energy influx. Models include the ionosphere models CTIPe and Weimer and the ionospheric electrodynamic outputs of the global magnetosphere models SWMF, LFM, and OpenGGCM. This study evaluates model performance in terms of the overall balance between energy influx and dissipation and tests the assumption that Joule dissipation occurs locally where electromagnetic energy flux enters the ionosphere. We present results in terms of skill scores now commonly used in metrics and validation studies, and we measure the agreement in terms of the temporal and spatial distribution of dissipation (i.e., location of auroral activity) along DMSP satellite passes, as a function of the passes' proximity to the magnetic pole and the solar wind activity level.
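
    The conversion step is a direct application of the DC Poynting theorem, S = (E × δB)/μ0, with the field-aligned component of S giving the electromagnetic energy flux into the ionosphere. The sketch below shows the arithmetic for a single, invented measurement; actual DMSP processing (coordinate rotations, baseline removal) is omitted.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def poynting_flux(E, dB):
    """DC Poynting flux S = (E x dB) / mu0 from an electric field vector (V/m)
    and a magnetic perturbation vector (T)."""
    return np.cross(E, dB) / MU0

# Illustrative magnitudes only: a 30 mV/m field with a 200 nT perturbation.
E = np.array([0.03, 0.0, 0.0])        # V/m
dB = np.array([0.0, 200e-9, 0.0])     # T
print(poynting_flux(E, dB))           # ~[0, 0, 4.8e-3] W/m^2, i.e. ~5 mW/m^2
```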

  20. Comparison of the binary logistic and skewed logistic (Scobit) models of injury severity in motor vehicle collisions.

    PubMed

    Tay, Richard

    2016-03-01

    The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance in the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases, in one of which the symmetry assumption is violated but not in the other. The differences in the estimates, and thus the marginal effects obtained, are significant when the assumption of symmetry is violated.
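
    The skewed logistic (scobit) model replaces the symmetric logistic CDF with P(y=1|x) = 1 − [1 + exp(β0 + β1·x)]^(−α), recovering the ordinary logit at α = 1. A minimal maximum-likelihood fit of both models on simulated, imbalanced data is sketched below; the data-generating values are invented, and the parameterization follows the common Nagler-style form rather than anything specific to this paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Imbalanced binary outcomes generated from a scobit with alpha = 0.5.
n = 5000
x = rng.normal(size=n)
p_true = 1.0 - (1.0 + np.exp(-2.0 + 1.0 * x)) ** -0.5
y = rng.random(n) < p_true

def negloglik(theta, skewed):
    b0, b1 = theta[:2]
    alpha = np.exp(theta[2]) if skewed else 1.0   # alpha = 1 recovers the logit
    p = 1.0 - (1.0 + np.exp(b0 + b1 * x)) ** -alpha
    p = np.clip(p, 1e-12, 1 - 1e-12)              # guard the log-likelihood
    return -np.sum(y * np.log(p) + (~y) * np.log1p(-p))

logit = minimize(negloglik, x0=np.zeros(3), args=(False,))
scobit = minimize(negloglik, x0=np.zeros(3), args=(True,))
print("logit  NLL %.1f" % logit.fun)
print("scobit NLL %.1f  alpha %.2f" % (scobit.fun, np.exp(scobit.x[2])))
```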

  1. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    PubMed

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on the key modelling assumptions and input data adopted to represent the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  2. Population Health in Canada: A Brief Critique

    PubMed Central

    Coburn, David; Denny, Keith; Mykhalovskiy, Eric; McDonough, Peggy; Robertson, Ann; Love, Rhonda

    2003-01-01

    An internationally influential model of population health was developed in Canada in the 1990s, shifting the research agenda beyond health care to the social and economic determinants of health. While agreeing that health has important social determinants, the authors believe that this model has serious shortcomings; they critique the model by focusing on its hidden assumptions. Assumptions about how knowledge is produced and an implicit interest group perspective exclude the sociopolitical and class contexts that shape interest group power and citizen health. Overly rationalist assumptions about change understate the role of agency. The authors review the policy and practice implications of the Canadian population health model and point to alternative ways of viewing the determinants of health. PMID:12604479

  3. Analyses of School Commuting Data for Exposure Modeling Purposes

    EPA Science Inventory

    Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...

  4. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  5. Climate fails to predict wood decomposition at regional scales

    NASA Astrophysics Data System (ADS)

    Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.

    2014-07-01

    Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood--a carbon stock estimated to represent 73 ± 6 Pg carbon globally--are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3 times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.
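
    The scale effect reported here is a classic aggregation artefact and is easy to reproduce. In the toy below, a weak climate signal and a strong log-level local factor generate decay rates; regressing site means on temperature makes climate look dominant, while the disaggregated data tell the opposite story. All effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# 30 sites x 40 logs: a weak climate signal plus a strong log-level local factor.
n_sites, n_logs = 30, 40
temp = rng.uniform(5, 25, n_sites)                   # site mean temperature
local = rng.normal(0, 1, (n_sites, n_logs))          # e.g., termite activity
decay = (0.01 * temp[:, None] + 0.10 * local
         + rng.normal(0, 0.02, (n_sites, n_logs)))   # decomposition rate

def r2(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

print("climate R2, disaggregated:", round(r2(np.repeat(temp, n_logs), decay.ravel()), 2))
print("climate R2, site means   :", round(r2(temp, decay.mean(axis=1)), 2))
```

    Averaging over logs suppresses the local-factor variance by a factor of the per-site sample size, which is exactly why mean analyses flatter climate as a predictor.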

  6. A review of selected inorganic surface water quality-monitoring practices: are we really measuring what we think, and if so, are we doing it right?

    USGS Publications Warehouse

    Horowitz, Arthur J.

    2013-01-01

    Successful environmental/water quality-monitoring programs usually require a balance between analytical capabilities, the collection and preservation of representative samples, and available financial/personnel resources. Due to current economic conditions, monitoring programs are under increasing pressure to do more with less. Hence, a review of current sampling and analytical methodologies, and of some of the underlying assumptions that form the bases for these programs, seems appropriate, to see if they are achieving their intended objectives within acceptable error limits and/or measurement uncertainty, in a cost-effective manner. That evaluation appears to indicate that several common sampling/processing/analytical procedures (e.g., dip (point) samples/measurements, nitrogen determinations, total recoverable analytical procedures) are generating biased or nonrepresentative data, and that some of the underlying assumptions of current programs, such as calendar-based sampling and stationarity, are no longer defensible. The extensive use of statistical models as well as surrogates (e.g., turbidity) also needs to be re-examined, because the hydrologic interrelationships that support their use tend to be dynamic rather than static. As a result, a number of monitoring programs may need redesigning, some sampling and analytical procedures may need to be updated, and model/surrogate interrelationships may require recalibration.

  7. A sup-score test for the cure fraction in mixture models for long-term survivors.

    PubMed

    Hsu, Wei-Wen; Todem, David; Kim, KyungMann

    2016-12-01

    The evaluation of cure fractions in oncology research under the well-known cure rate model has attracted considerable attention in the literature, but most of the existing testing procedures have relied on restrictive assumptions. A common assumption has been to restrict the cure fraction to a constant under alternatives to homogeneity, thereby neglecting any information from covariates. This article extends the literature by developing a score-based statistic that incorporates covariate information to detect cure fractions, with the existing testing procedure serving as a special case. A complication of this extension, however, is that the implied hypotheses are not typical and standard regularity conditions to conduct the test may not even hold. Using empirical process arguments, we construct a sup-score test statistic for cure fractions and establish its limiting null distribution as a functional of mixtures of chi-square processes. In practice, we suggest a simple resampling procedure to approximate this limiting distribution. Our simulation results show that the proposed test can greatly improve efficiency over tests that neglect the heterogeneity of the cure fraction under the alternative. The practical utility of the methodology is illustrated using ovarian cancer survival data with long-term follow-up from the Surveillance, Epidemiology, and End Results registry.

  8. A test of the critical assumption of the sensory bias model for the evolution of female mating preference using neural networks.

    PubMed

    Fuller, Rebecca C

    2009-07-01

    The sensory bias model for the evolution of mating preferences states that mating preferences evolve as correlated responses to selection on nonmating behaviors sharing a common sensory system. The critical assumption is that pleiotropy creates genetic correlations that affect the response to selection. I simulated selection on populations of neural networks to test this. First, I selected for various combinations of foraging and mating preferences. Sensory bias predicts that populations with preferences for like-colored objects (red food and red mates) should evolve more readily than preferences for differently colored objects (red food and blue mates). Here, I found no evidence for sensory bias. The responses to selection on foraging and mating preferences were independent of one another. Second, I selected on foraging preferences alone and asked whether there were correlated responses for increased mating preferences for like-colored mates. Here, I found modest evidence for sensory bias. Selection for a particular foraging preference resulted in increased mating preference for similarly colored mates. However, the correlated responses were small and inconsistent. Selection on foraging preferences alone may affect initial levels of mating preferences, but these correlations did not constrain the joint evolution of foraging and mating preferences in these simulations.

  9. Modeling thermal infrared (2-14 micrometer) reflectance spectra of frost and snow

    NASA Technical Reports Server (NTRS)

    Wald, Andrew E.

    1994-01-01

    Existing theories of radiative transfer in close-packed media assume that each particle scatters independently of its neighbors. For particles that are opaque, as is common at thermal infrared wavelengths, this assumption is not valid, and these radiative transfer theories will not be accurate. A new method is proposed, called 'diffraction subtraction', which modifies the scattering cross section of close-packed, large, opaque spheres to account for the effect of close packing on the diffraction cross section of a scattering particle. This method predicts the thermal infrared reflectance of coarse (greater than 50 micrometer radius), disaggregated granular snow. However, such coarse snow is typically old and metamorphosed, with adjacent grains welded together. The reflectance of such a welded block can be described as partly Fresnel in nature and cannot be predicted using Mie inputs to radiative transfer theory. Owing to the high absorption coefficient of ice in the thermal infrared, a rough-surface reflectance model can be used to calculate reflectance from such a block. For very small (less than 50 micrometer), disaggregated particles, it is incorrect in principle to treat diffraction independently of reflection and refraction, and the theory fails. However, for particles larger than 50 micrometers, independent scattering is a valid assumption, and standard radiative transfer theory works.

  10. Does Gene Tree Discordance Explain the Mismatch between Macroevolutionary Models and Empirical Patterns of Tree Shape and Branching Times?

    PubMed Central

    Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.

    2016-01-01

    Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
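
    Tree balance in studies like this is typically quantified with an imbalance statistic such as the Colless index. The sketch below simulates Yule (pure-birth) species-tree topologies and computes that index; the multispecies-coalescent step that would turn these into gene trees is deliberately omitted, and the tip count and replicate number are arbitrary.

```python
import random

def yule_tree(n_tips, rng):
    """Grow a pure-birth topology by splitting a uniformly chosen tip until
    n_tips exist; returns a child map {internal node: [left, right]}."""
    tips, next_id, children = [0], 1, {}
    while len(tips) < n_tips:
        parent = tips.pop(rng.randrange(len(tips)))
        kids = [next_id, next_id + 1]
        next_id += 2
        children[parent] = kids
        tips.extend(kids)
    return children

def colless(children, node=0):
    """Return (tips below node, summed |left - right| imbalance below node)."""
    if node not in children:                       # leaf
        return 1, 0
    (nl, il), (nr, ir) = (colless(children, k) for k in children[node])
    return nl + nr, il + ir + abs(nl - nr)

rng = random.Random(42)
scores = [colless(yule_tree(32, rng))[1] for _ in range(1000)]
print("mean Colless index, 32-tip Yule trees:", sum(scores) / len(scores))
```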

  11. Common-sense chemistry: The use of assumptions and heuristics in problem solving

    NASA Astrophysics Data System (ADS)

    Maeyer, Jenine Rachel

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many interviewees seemed to view chemical reactions as macroscopic reassembling processes where favorability was related to the perceived ease with which reactants broke apart or products formed. Students also expressed spurious chemical assumptions based on the misinterpretation and overgeneralization of periodicity and electronegativity. Our findings suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts.

  12. How biological background assumptions influence scientific risk evaluation of stacked genetically modified plants: an analysis of research hypotheses and argumentations.

    PubMed

    Rocca, Elena; Andersen, Fredrik

    2017-08-14

    Scientific risk evaluations are constructed by specific evidence, value judgements and biological background assumptions. The latter are the framework-setting suppositions we apply in order to understand some new phenomenon. That background assumptions co-determine choice of methodology, data interpretation, and choice of relevant evidence is an uncontroversial claim in modern basic science. Furthermore, it is commonly accepted that, unless explicated, disagreements in background assumptions can lead to misunderstanding as well as miscommunication. Here, we extend the discussion on background assumptions from basic science to the debate over genetically modified (GM) plants risk assessment. In this realm, while the different political, social and economic values are often mentioned, the identity and role of background assumptions at play are rarely examined. We use an example from the debate over risk assessment of stacked genetically modified plants (GM stacks), obtained by applying conventional breeding techniques to GM plants. There are two main regulatory practices of GM stacks: (i) regulate as conventional hybrids and (ii) regulate as new GM plants. We analyzed eight papers representative of these positions and found that, in all cases, additional premises are needed to reach the stated conclusions. We suggest that these premises play the role of biological background assumptions and argue that the most effective way toward a unified framework for risk analysis and regulation of GM stacks is by explicating and examining the biological background assumptions of each position. Once explicated, it is possible to either evaluate which background assumptions best reflect contemporary biological knowledge, or to apply Douglas' 'inductive risk' argument.

  13. Kinetic limitations on tracer partitioning in ganglia dominated source zones.

    PubMed

    Ervin, Rhiannon E; Boroumand, Ali; Abriola, Linda M; Ramsburg, C Andrew

    2011-11-01

    Quantification of the relationship between dense nonaqueous phase liquid (DNAPL) source strength, source longevity and spatial distribution is increasingly recognized as important for effective remedial design. Partitioning tracers are one tool that may permit interrogation of DNAPL architecture. Tracer data are commonly analyzed under the assumption of linear, equilibrium partitioning, although the appropriateness of these assumptions has not been fully explored. Here we focus on elucidating the nonlinear and nonequilibrium partitioning behavior of three selected alcohol tracers (1-pentanol, 1-hexanol and 2-octanol) in a series of batch and column experiments. Liquid-liquid equilibria for systems comprising water, TCE and the selected alcohol illustrate the nonlinear distribution of alcohol between the aqueous and organic phases. Complete quantification of these equilibria facilitates delineation of the limits of applicability of the linear partitioning assumption, and assessment of potential inaccuracies associated with measurement of partition coefficients at a single concentration. Column experiments were conducted under conditions of non-equilibrium to evaluate the kinetics of the reversible absorption of the selected tracers in a sandy medium containing a uniform entrapped saturation of TCE-DNAPL. Experimental tracer breakthrough data were used, in conjunction with mathematical models and batch measurements, to evaluate alternative hypotheses for observed deviations from linear equilibrium partitioning behavior. Analyses suggest that, although all tracers accumulate at the TCE-DNAPL/aqueous interface, surface accumulation does not influence transport at concentrations typically employed for tracer tests. Moreover, results reveal that the kinetics of the reversible absorption process are well described using existing mass transfer correlations originally developed to model aqueous boundary layer resistance for pure-component NAPL dissolution.
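
    A linear-driving-force model is the usual starting point for the reversible absorption described here: the flux into the NAPL is proportional to the departure from local partitioning equilibrium. The batch-system sketch below integrates that model; the partition coefficient, rate coefficient and phase volumes are illustrative stand-ins, not values fitted in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Batch system: tracer partitions reversibly between water and entrapped NAPL.
Kp = 30.0            # NAPL/water partition coefficient (illustrative)
k = 0.5              # lumped mass-transfer rate coefficient, 1/h (illustrative)
Vw, Vn = 1.0, 0.05   # water and NAPL volumes, L

def rhs(t, y):
    Cw, Cn = y                        # aqueous and NAPL-phase concentrations
    flux = k * (Kp * Cw - Cn)         # linear driving force toward equilibrium
    return [-flux * Vn / Vw, flux]    # water loses what the NAPL gains

sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0])   # 24 h from a clean NAPL
Cw, Cn = sol.y[:, -1]
print("fraction of equilibrium reached:", Cn / (Kp * Cw))
```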

  14. THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS

    EPA Science Inventory

    Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...
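
    The quantitative consequence of ignoring speciation follows directly from the Henderson-Hasselbalch relation: the neutral fraction of a monoprotic acid at a given pH is 1/(1 + 10^(pH − pKa)). The sketch below applies it to a pentachlorophenol-like acid (pKa near 4.7, used here as an assumed example) in a neutral stream.

```python
def fraction_neutral(pH, pKa, acid=True):
    """Henderson-Hasselbalch speciation: fraction of a monoprotic compound
    present as the neutral species. For an acid HA the neutral form dominates
    below its pKa; for a base, above its conjugate-acid pKa."""
    ratio = 10.0 ** (pH - pKa)          # [A-]/[HA] for an acid
    return 1.0 / (1.0 + ratio) if acid else ratio / (1.0 + ratio)

# A pentachlorophenol-like acid (pKa ~ 4.7) at pH 7 is ~99.5% ionized, so a
# neutral-species-only fate model would badly misjudge its behavior here.
print(fraction_neutral(pH=7.0, pKa=4.7))   # ~0.005 neutral
```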

  15. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
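
    In practice, a check akin to the time-dependent covariate and weighted-residuals tests compared here can be run with the lifelines library (assumed available; this is one scaled-Schoenfeld-style implementation, not the paper's code). The sketch simulates diverging hazards, one of the violation patterns the paper studies, and tests the PH assumption.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(5)

# Diverging hazards: a group-dependent Weibull shape makes the hazard ratio
# change over time, violating proportionality.
n = 1000
x = rng.binomial(1, 0.5, n)
T = 10.0 * rng.weibull(np.where(x == 1, 2.0, 1.0), n)
obs = np.minimum(T, 15.0)               # administrative censoring at t = 15
df = pd.DataFrame({"T": obs, "E": T < 15.0, "x": x})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()                   # a small p-value flags non-proportionality
```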

  16. The effect of solution nonideality on modeling transmembrane water transport and diffusion-limited intracellular ice formation during cryopreservation

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Takamatsu, Hiroshi; He, Xiaoming

    2014-04-01

    A new model was developed to predict transmembrane water transport and diffusion-limited ice formation in cells during freezing without the ideal-solution assumption that has been used in previous models. The model was applied to predict cell dehydration and intracellular ice formation (IIF) during cryopreservation of mouse oocytes and bovine carotid artery endothelial cells in aqueous sodium chloride (NaCl) solution with glycerol as the cryoprotectant or cryoprotective agent. A comparison of the predictions between the present model and the previously reported models indicated that the ideal-solution assumption results in under-prediction of the amount of intracellular ice at slow cooling rates (<50 K/min). In addition, the lower critical cooling rates for IIF that is lethal to cells predicted by the present model were much lower than those estimated with the ideal-solution assumption. This study represents the first investigation on how accounting for solution nonideality in modeling water transport across the cell membrane could affect the prediction of diffusion-limited ice formation in biological cells during freezing. Future studies are warranted to look at other assumptions alongside nonideality to further develop the model as a useful tool for optimizing the protocol of cell cryopreservation for practical applications.
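
    The core of such models is a single osmotically driven water-flux equation; the ideal-dilute version sets intracellular osmotic pressure proportional to concentration, while a nonideal version adds virial terms. The normalized sketch below contrasts the two, with the virial coefficient, external osmolality and time scale all invented; real implementations carry units, temperature dependence and an ice-formation submodel.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normalized osmotic dehydration: dV/dt = -(pi_ext - pi_int), with the
# membrane factor Lp*A*R*T absorbed into the time scale.
n_solute = 0.3         # intracellular osmoles (normalized)
pi_ext = 2.0           # hypertonic external osmotic pressure (normalized)
B = 0.1                # second osmotic virial coefficient (nonideality)

def pi_int(m, ideal):
    return m if ideal else m * (1.0 + B * m)   # virial correction when nonideal

def rhs(t, V, ideal):
    m = n_solute / V[0]                        # intracellular concentration
    return [-(pi_ext - pi_int(m, ideal))]

for ideal in (True, False):
    V_end = solve_ivp(rhs, (0.0, 20.0), [1.0], args=(ideal,)).y[0, -1]
    print("ideal " if ideal else "virial", "equilibrium volume ~", round(V_end, 3))
```

    In this toy, the ideal-dilute assumption yields the smaller equilibrium volume, i.e. it predicts more dehydration than the virial model, which is the direction of bias the abstract describes.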

  18. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  19. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    NASA Astrophysics Data System (ADS)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of a number of textbooks shows that their definitions of the nature of science are similarly inconsistent and excessively loquacious. With such confusion from both the student and teacher perspectives, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  20. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
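
    Regression calibration, the most common correction discussed in this review, is simple to sketch: in a calibration substudy, regress a reference measure on the error-prone instrument, then rescale the naive diet-disease estimate by the resulting attenuation factor. The simulation below illustrates this under the classical measurement error model; all variances and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# Main study: an error-prone food-frequency questionnaire (FFQ) and an outcome.
n, n_cal = 5000, 500
true_intake = rng.normal(0, 1, n)
ffq = true_intake + rng.normal(0, 1, n)          # classical measurement error
outcome = 0.3 * true_intake + rng.normal(0, 1, n)

# Naive slope is attenuated by the error in the FFQ.
naive = np.polyfit(ffq, outcome, 1)[0]

# Calibration substudy: regress a near-gold-standard reference on the FFQ to
# estimate the attenuation factor, then rescale the naive estimate.
idx = rng.choice(n, n_cal, replace=False)
reference = true_intake[idx] + rng.normal(0, 0.2, n_cal)
lam = np.polyfit(ffq[idx], reference, 1)[0]      # estimated attenuation factor
corrected = naive / lam

print(f"true 0.30 | naive {naive:.2f} | corrected {corrected:.2f}")
```

    The correction is only as good as its assumptions: the reference errors must be independent of the FFQ errors, which is exactly the classical-model requirement the review stresses.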
