Sample records for specific modeling assumptions

  1. Roy's specific life values and the philosophical assumption of humanism.

    PubMed

    Hanna, Debra R

    2013-01-01

    Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.

  2. Using "Excel" for White's Test--An Important Technique for Evaluating the Equality of Variance Assumption and Model Specification in a Regression Analysis

    ERIC Educational Resources Information Center

    Berenson, Mark L.

    2013-01-01

    There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
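
    A minimal sketch of White's test in Python (statsmodels) rather than the Excel workflow the article describes; the simulated data and regression model are illustrative only.

    ```python
    # White's test: regress the OLS squared residuals on the regressors, their
    # squares, and cross-products; n*R^2 is asymptotically chi-square under
    # homoscedasticity. Simulated data with error variance growing in x.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_white

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0 + 0.3 * x)   # heteroscedastic errors

    X = sm.add_constant(x)                    # design matrix with intercept
    resid = sm.OLS(y, X).fit().resid          # OLS residuals
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
    print(f"White LM statistic = {lm_stat:.2f}, p-value = {lm_pvalue:.4f}")
    ```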

  3. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    NASA Astrophysics Data System (ADS)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions versus improved data and enhanced assumptions on model outcomes and, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  4. A closure test for time-specific capture-recapture data

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. This test is chi-square, and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration and least sensitive to temporary emigration, and is of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.

  5. Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University

    The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1). The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration. • While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.

  6. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
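
    For orientation, a sketch of the classic two-subclass change-in-ratio estimator, which assumes equal encounter probabilities across subclasses within each survey; the generalized, effort-based estimators described in the abstract relax exactly this kind of assumption. The numbers below are hypothetical.

    ```python
    # Classic two-subclass change-in-ratio (CIR) estimator -- a simpler special
    # case than the generalized, effort-based model described in the paper.
    # Assumes equal encounter probabilities for both subclasses within each survey.
    def cir_population_estimate(p1, p2, removals_total, removals_subclass):
        """Estimate the pre-removal population size N1.

        p1, p2            -- focal-subclass proportion in surveys before/after removal
        removals_total    -- total animals removed between surveys
        removals_subclass -- removals belonging to the focal subclass
        """
        if p1 == p2:
            raise ValueError("Subclass proportions must change between surveys.")
        return (removals_subclass - p2 * removals_total) / (p1 - p2)

    # Hypothetical numbers: proportion of males drops from 0.60 to 0.45 after
    # removing 150 animals, 120 of them males.
    print(round(cir_population_estimate(0.60, 0.45, 150, 120), 1))   # -> 350.0
    ```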

  7. Individual Change and the Timing and Onset of Important Life Events: Methods, Models, and Assumptions

    ERIC Educational Resources Information Center

    Grimm, Kevin; Marcoulides, Katerina

    2016-01-01

    Researchers are often interested in studying how the timing of a specific event affects concurrent and future development. When faced with such research questions there are multiple statistical models to consider and those models are the focus of this paper as well as their theoretical underpinnings and assumptions regarding the nature of the…

  8. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  9. Instrumental variable specifications and assumptions for longitudinal analysis of mental health cost offsets.

    PubMed

    O'Malley, A James

    2012-12-01

    Instrumental variables (IVs) enable causal estimates in observational studies to be obtained in the presence of unmeasured confounders. In practice, a diverse range of models and IV specifications can be brought to bear on a problem, particularly with longitudinal data where treatment effects can be estimated for various functions of current and past treatment. However, in practice the empirical consequences of different assumptions are seldom examined, despite the fact that IV analyses make strong assumptions that cannot be conclusively tested by the data. In this paper, we consider several longitudinal models and specifications of IVs. Methods are applied to data from a 7-year study of mental health costs of atypical and conventional antipsychotics whose purpose was to evaluate whether the newer and more expensive atypical antipsychotic medications lead to a reduction in overall mental health costs.
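
    A minimal two-stage least squares sketch on simulated data to show the core IV logic the abstract builds on; the variable names, instrument, and effect sizes are illustrative and not taken from the study.

    ```python
    # Two-stage least squares (2SLS): stage 1 projects treatment onto the
    # instrument; stage 2 regresses the outcome on the fitted treatment,
    # removing bias from the unmeasured confounder u.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    u = rng.normal(size=n)                      # unmeasured confounder
    z = rng.normal(size=n)                      # instrument: affects treatment only
    treat = 0.8 * z + 0.5 * u + rng.normal(size=n)
    cost = 2.0 * treat + 1.5 * u + rng.normal(size=n)   # true treatment effect = 2.0

    def ols(y, X):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    X_first = np.column_stack([np.ones(n), z])
    treat_hat = X_first @ ols(treat, X_first)    # stage 1: fitted treatment
    X_second = np.column_stack([np.ones(n), treat_hat])
    beta = ols(cost, X_second)                   # stage 2: outcome on fitted treatment
    print(f"naive OLS effect: {ols(cost, np.column_stack([np.ones(n), treat]))[1]:.2f}")
    print(f"2SLS effect:      {beta[1]:.2f}")
    ```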

  10. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    PubMed

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. Closing, the article derives some meta-desiderata for future research on individuals' "personality".

  11. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
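
    An illustrative sketch of Lagrangian stress accumulation along particle pathlines using a generic power-law damage index; the constants and the synthetic pathline stresses are placeholders, not values or results from the paper, where stress histories come from CFD pathlines.

    ```python
    # Power-law blood damage index D = sum(C * tau^a * dt^b) accumulated along
    # each particle pathline; compares single-passage with a naive repeated-
    # passage accumulation. Constants and stress histories are hypothetical.
    import numpy as np

    C, a, b = 3.6e-5, 2.4, 0.8     # hypothetical power-law constants

    def damage_index(shear_stress, dt):
        """Accumulate damage along one pathline (single passage)."""
        return float(np.sum(C * shear_stress**a * dt**b))

    rng = np.random.default_rng(2)
    n_particles, n_steps = 1000, 200
    dt = 1e-3                                       # s, uniform time step
    tau = rng.lognormal(mean=2.0, sigma=0.5, size=(n_particles, n_steps))  # Pa

    single_passage = np.array([damage_index(tau[i], dt) for i in range(n_particles)])
    repeated = single_passage * 5                   # naive 5-passage accumulation
    print(f"mean damage, single passage: {single_passage.mean():.3e}")
    print(f"mean damage, repeated (x5):  {repeated.mean():.3e}")
    ```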

  12. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
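
    A sketch of the sensory filter such models vary: which conspecifics fall inside an individual's visual coverage angle and acuity-limited detection range. The parameter values below are illustrative, not the physiological measurements used in the study.

    ```python
    # Find neighbours a focal individual can detect, given its heading, an
    # angular visual coverage, and a maximum (acuity-limited) detection range.
    import numpy as np

    def perceivable_neighbours(positions, focal_idx, heading_rad, coverage_deg, max_range):
        """Indices of conspecifics the focal individual can detect."""
        offsets = positions - positions[focal_idx]
        distances = np.linalg.norm(offsets, axis=1)
        bearings = np.degrees(np.arctan2(offsets[:, 1], offsets[:, 0]) - heading_rad)
        bearings = np.abs((bearings + 180.0) % 360.0 - 180.0)      # wrap to [0, 180]
        visible = (distances > 0) & (distances <= max_range) & (bearings <= coverage_deg / 2.0)
        return np.flatnonzero(visible)

    rng = np.random.default_rng(3)
    flock = rng.uniform(0, 50, size=(30, 2))        # 30 individuals in a 50 x 50 arena
    classic = perceivable_neighbours(flock, 0, heading_rad=0.0, coverage_deg=300, max_range=5.0)
    realistic = perceivable_neighbours(flock, 0, heading_rad=0.0, coverage_deg=340, max_range=15.0)
    print(f"classic assumptions:   {classic.size} detectable neighbours")
    print(f"realistic assumptions: {realistic.size} detectable neighbours")
    ```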

  13. Plant ecosystem responses to rising atmospheric CO2: applying a "two-timing" approach to assess alternative hypotheses for mechanisms of nutrient limitation

    NASA Astrophysics Data System (ADS)

    Medlyn, B.; Jiang, M.; Zaehle, S.

    2017-12-01

    There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.

  14. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833

  15. Judging Statistical Models of Individual Decision Making under Risk Using In- and Out-of-Sample Criteria

    PubMed Central

    Drichoutis, Andreas C.; Lusk, Jayson L.

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample. PMID:25029467
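
    A sketch of how an error specification turns a deterministic expected-utility comparison into a choice probability, contrasting an additive (Fechner/probit) error with a logit-style error; the CRRA utility, lotteries, and noise parameters are illustrative only, and the paper also considers rank-dependent and mixture functionals.

    ```python
    # Expected utility for two lotteries under CRRA, then choice probabilities
    # under two common stochastic-choice (error) specifications.
    import numpy as np
    from scipy.stats import norm

    def crra(x, r):
        """Constant relative risk aversion utility."""
        return np.log(x) if r == 1.0 else x**(1.0 - r) / (1.0 - r)

    def expected_utility(prizes, probs, r):
        return float(np.sum(np.asarray(probs) * crra(np.asarray(prizes, dtype=float), r)))

    eu_a = expected_utility([10.0, 30.0], [0.5, 0.5], r=0.5)   # hypothetical lottery A
    eu_b = expected_utility([5.0, 40.0], [0.5, 0.5], r=0.5)    # hypothetical lottery B

    sigma, lam = 0.5, 0.5
    p_fechner = norm.cdf((eu_a - eu_b) / sigma)           # additive (probit) error
    p_logit = 1.0 / (1.0 + np.exp(-(eu_a - eu_b) / lam))  # logit-style error
    print(f"P(choose A), Fechner error: {p_fechner:.3f}")
    print(f"P(choose A), logit error:   {p_logit:.3f}")
    ```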

  16. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    PubMed

    Drichoutis, Andreas C; Lusk, Jayson L

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.

  17. Determining informative priors for cognitive models.

    PubMed

    Lee, Michael D; Vanpaemel, Wolf

    2018-02-01

    The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
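
    A grid-approximation sketch of how an informative prior enters a simple exponential memory-retention model and shifts the posterior relative to a vague prior; the data, grid, and prior settings are made up for illustration.

    ```python
    # Retention model: P(recall at lag t) = exp(-alpha * t), binomial data.
    # Compare posteriors over the decay rate alpha under a vague prior and an
    # informative (gamma-shaped) prior on a discrete grid.
    import numpy as np
    from scipy import stats

    lags = np.array([1, 2, 4, 8, 16])          # retention intervals
    n_trials = 40
    recalled = np.array([35, 28, 20, 12, 5])   # hypothetical recall counts

    alpha_grid = np.linspace(0.01, 1.0, 500)
    d_alpha = alpha_grid[1] - alpha_grid[0]
    log_lik = np.array([
        stats.binom.logpmf(recalled, n_trials, np.exp(-a * lags)).sum()
        for a in alpha_grid
    ])

    vague_prior = np.ones_like(alpha_grid)                              # flat over the grid
    informative_prior = stats.gamma.pdf(alpha_grid, a=4, scale=0.05)    # centred near 0.2

    def posterior(prior):
        unnorm = prior * np.exp(log_lik - log_lik.max())
        return unnorm / (unnorm.sum() * d_alpha)

    for name, prior in [("vague", vague_prior), ("informative", informative_prior)]:
        post = posterior(prior)
        mean = np.sum(alpha_grid * post) * d_alpha
        print(f"{name:12s} prior -> posterior mean decay rate = {mean:.3f}")
    ```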

  18. Model specification in oral health-related quality of life research.

    PubMed

    Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan

    2009-10-01

    The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.
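
    A minimal implementation of Cronbach's alpha, the kind of reflective-model statistic the article argues is routinely applied to OHRQoL questionnaires; the simulated item responses are placeholders.

    ```python
    # Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score).
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(300, 1))
    responses = latent + rng.normal(scale=0.8, size=(300, 6))   # 6 correlated items
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```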

  19. Modeling Differential Item Functioning Using a Generalization of the Multiple-Group Bifactor Model

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rijmen, Frank; Rabe-Hesketh, Sophia

    2013-01-01

    The authors present a generalization of the multiple-group bifactor model that extends the classical bifactor model for categorical outcomes by relaxing the typical assumption of independence of the specific dimensions. In addition to the means and variances of all dimensions, the correlations among the specific dimensions are allowed to differ…

  20. Identification of differences in health impact modelling of salt reduction

    PubMed Central

    Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.

    2017-01-01

    We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how each influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models are crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636

  1. Cocirculation of infectious diseases on networks

    NASA Astrophysics Data System (ADS)

    Miller, Joel C.

    2013-06-01

    We consider multiple diseases spreading in a static configuration model network. We make standard assumptions that infection transmits from neighbor to neighbor at a disease-specific rate and infected individuals recover at a disease-specific rate. Infection by one disease confers immediate and permanent immunity to infection by any disease. Under these assumptions, we find a simple, low-dimensional ordinary differential equations model which captures the global dynamics of the infection. The dynamics depend strongly on initial conditions. Although we motivate this Rapid Communication with infectious disease, the model may be adapted to the spread of other infectious agents such as competing political beliefs, or adoption of new technologies if these are influenced by contacts. As an example, we demonstrate how to model an infectious disease which can be prevented by a behavior change.
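
    A well-mixed (mass-action) analogue of two cocirculating SIR diseases with cross-immunity, as a simpler stand-in for the paper's configuration-model network equations; the rates and initial conditions are illustrative.

    ```python
    # Two SIR diseases compete for the same susceptible pool: infection by either
    # disease confers immunity to both, so S is depleted by both forces of infection.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta1, gamma1 = 0.5, 0.2     # transmission/recovery rates, disease 1
    beta2, gamma2 = 0.4, 0.1     # transmission/recovery rates, disease 2

    def rhs(t, y):
        s, i1, i2, r = y
        new1 = beta1 * s * i1
        new2 = beta2 * s * i2
        return [-(new1 + new2), new1 - gamma1 * i1, new2 - gamma2 * i2,
                gamma1 * i1 + gamma2 * i2]

    y0 = [0.998, 0.001, 0.001, 0.0]
    sol = solve_ivp(rhs, (0, 200), y0, max_step=1.0)
    s_end, i1_end, i2_end, r_end = sol.y[:, -1]
    print(f"final susceptible fraction: {s_end:.3f}, recovered: {r_end:.3f}")
    ```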

  2. Applying the Principles of Specific Objectivity and of Generalizability to the Measurement of Change.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1987-01-01

    A natural parameterization and formalization of the problem of measuring change in dichotomous data is developed. Mathematically-exact definitions of specific objectivity are presented, and the basic structures of the linear logistic test model and the linear logistic model with relaxed assumptions are clarified. (SLD)
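
    A sketch of the linear logistic test model (LLTM) response probability, in which item difficulty is decomposed into a weighted sum of basic parameters; the weights and parameter values are made up for illustration.

    ```python
    # LLTM: P(correct) = exp(theta - sum_j q_j * eta_j) / (1 + exp(theta - sum_j q_j * eta_j)),
    # i.e. a Rasch model whose item difficulty is a linear combination of basic parameters.
    import math

    def lltm_probability(theta, q, eta):
        """theta: person ability; q: item weights on the basic parameters eta."""
        difficulty = sum(qj * ej for qj, ej in zip(q, eta))
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    eta = [0.8, -0.3, 0.5]              # hypothetical basic (operation-specific) parameters
    item_weights = [1, 0, 2]            # how often each operation enters this item
    print(f"P(solve) = {lltm_probability(theta=1.0, q=item_weights, eta=eta):.3f}")
    ```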

  3. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  4. Academic emotions from a social-cognitive perspective: antecedents and domain specificity of students' affect in the context of Latin instruction.

    PubMed

    Goetz, Thomas; Pekrun, Reinhard; Hall, Nathan; Haag, Ludwig

    2006-06-01

    This study concentrates on two assumptions of a social-cognitive model outlining the development of academic emotions (emotions directly linked to learning, classroom instruction, and achievement), namely on their antecedents and domain-specific organization. Our sample consisted of 200 students from Grades 7 to 10. Proposed relationships concerning the antecedents of academic emotions were tested in the context of Latin language instruction. Correlational analyses substantiated our assumptions concerning the relationships between academic emotions, students' cognitions, and aspects of the social environment. The mediating mechanisms proposed in the model were also confirmed using linear structural equation modelling. Subjective control- and value-related cognitions were found to mediate the relationship between aspects of the social environment and students' emotional experience. Our results further suggest that academic emotions are largely organized along domain-specific lines, with the degree of domain specificity varying according to the emotion in question. Implications for research and practice are discussed.

  5. Variability of hemodynamic parameters using the common viscosity assumption in a computational fluid dynamics analysis of intracranial aneurysms.

    PubMed

    Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi

    2017-01-01

    In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for three different-sized aneurysms, two using measured value-based non-Newtonian models and one using a Newtonian model. The rupture-prediction parameters obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. When the ratio of this difference to the Newtonian-model NWSS was compared between the two non-Newtonian models, the ratios differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or a common non-Newtonian viscosity assumption could yield hemodynamic parameters such as NWSS that differ from those of a patient-specific viscosity model.
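
    A comparison of the common Newtonian viscosity assumption with a Carreau shear-thinning model of blood; the Carreau constants are commonly cited literature values, not the volunteer-specific measurements used in this study.

    ```python
    # Carreau model: mu = mu_inf + (mu_0 - mu_inf) * (1 + (lambda * gamma_dot)^2)^((n - 1) / 2),
    # compared with a constant Newtonian viscosity at several shear rates.
    MU_NEWTONIAN = 0.0035   # Pa*s, typical constant-viscosity assumption

    def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
        """Shear-thinning viscosity (Pa*s) as a function of shear rate (1/s)."""
        return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

    for gamma_dot in [1.0, 10.0, 100.0, 1000.0]:
        mu = carreau_viscosity(gamma_dot)
        print(f"shear rate {gamma_dot:7.1f} 1/s: Carreau {mu:.4f} Pa*s "
              f"vs Newtonian {MU_NEWTONIAN:.4f} Pa*s")
    ```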

  6. Improving Domain-specific Machine Translation by Constraining the Language Model

    DTIC Science & Technology

    2012-07-01

    performance. To make up for the lack of parallel training data, one assumption is that more monolingual target language data should be used in building the...target language model. Prior work on domain-specific MT has focused on training target language models with monolingual domain-specific data...showed that using a large dictionary extracted from medical domain documents in a statistical MT system to generalize the training data significantly

  7. Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS

    PubMed Central

    Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.

    2010-01-01

    Background: Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings: We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the ‘clinical effectiveness’ of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions: The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741

  8. Assessing the performance of a computer-based policy model of HIV and AIDS.

    PubMed

    Rydzak, Chara E; Cotich, Kara L; Sax, Paul E; Hsu, Heather E; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A; Weinstein, Milton C; Goldie, Sue J

    2010-09-09

    Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the 'clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models.

  9. Design verification of SIFT

    NASA Technical Reports Server (NTRS)

    Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard

    1987-01-01

    A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety-critical flight control applications by use of processor replication and voting, was constructed by SRI and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification, defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, did indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof, and describes the various specifications and proofs performed.

  10. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.

  11. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  12. Control function assisted IPW estimation with a secondary outcome in case-control studies.

    PubMed

    Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J

    2017-04-01

    Case-control studies are designed towards studying associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
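
    A minimal IPW sketch for a secondary-outcome regression in a case-control sample, weighting each subject by the inverse of their stratum's sampling probability; the simulation is illustrative, and the control-function estimators proposed in the paper add an outcome-model assumption on top of this baseline approach.

    ```python
    # Simulate a population, oversample cases 1:1 with controls, then compare a
    # naive regression of the secondary outcome on the risk factor with an
    # IPW-weighted regression that uses the (known) disease prevalence.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n_pop = 200_000
    x = rng.normal(size=n_pop)                               # risk factor
    y2 = 1.0 + 0.5 * x + rng.normal(size=n_pop)              # secondary outcome (true slope 0.5)
    d = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.5 + 0.8 * x + 0.3 * y2))))
    prevalence = d.mean()                                    # treated as known external information

    cases = np.flatnonzero(d == 1)
    controls = rng.choice(np.flatnonzero(d == 0), size=cases.size, replace=False)
    idx = np.concatenate([cases, controls])

    # IPW weights: population stratum size divided by sampled stratum size.
    w = np.where(d[idx] == 1,
                 (prevalence * n_pop) / cases.size,
                 ((1 - prevalence) * n_pop) / controls.size)

    X = sm.add_constant(x[idx])
    naive = sm.OLS(y2[idx], X).fit().params[1]
    ipw = sm.WLS(y2[idx], X, weights=w).fit().params[1]
    print(f"naive slope: {naive:.3f}   IPW slope: {ipw:.3f}   true slope: 0.500")
    ```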

  13. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    PubMed

    Heilbron, Micha; Chait, Maria

    2017-08-04

    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  14. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  15. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  16. Privacy Protection through pseudonymisation in eHealth.

    PubMed

    De Meyer, F; De Moor, G; Reed-Fourquet, L

    2008-01-01

    The ISO TC215 WG4 pseudonymisation task group produced in 2008 a first version of a technical specification for the application of pseudonymisation in healthcare informatics. This paper investigates the principles set out in the technical specification as well as its implications for eHealth. The technical specification starts out with a conceptual model and evolves from a theoretical model to a real-life model by adding assumptions on the observability of personal data.
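
    One common pseudonymisation building block, sketched for illustration: deriving a stable pseudonym from an identifier with a keyed hash (HMAC). This shows the general idea only and is not the specific scheme defined in the ISO technical specification.

    ```python
    # Keyed-hash pseudonymisation: the same identifier and key always yield the
    # same pseudonym, but the mapping cannot be recomputed without the secret key.
    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-key-held-by-the-pseudonymisation-service"

    def pseudonymise(patient_id: str) -> str:
        return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

    print(pseudonymise("patient-12345"))   # same input + key -> same pseudonym
    ```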

  17. Simulation of Biomass Accumulation Pattern in Vapor-Phase Biofilters

    PubMed Central

    Xi, Jin-Ying; Hu, Hong-Ying; Zhang, Xian

    2012-01-01

    Existence of inert biomass and its impact on biomass accumulation patterns and biofilter performance were investigated. Four biofilters were set up in parallel to treat gaseous toluene. Each biofilter operated under different inlet toluene loadings for 100 days. Two microbial growth models, one with an inert biomass assumption and the other without, were established and compared. Results from the model with the inert biomass assumption showed better agreement with the experimental data than those based on the model without the inert biomass assumption, thus verifying that inert biomass accumulation cannot be ignored in the long-term operation of biofilters. According to the model with an inert biomass assumption, the ratio of active biomass to total biomass will decrease and the inert biomass will become dominant in total biomass after a period of time. Filter bed structure simulation results showed that the void fraction is more sensitive to biomass accumulation than the specific surface area. The final void fraction of the biofilters with the highest inlet toluene loading is only 67% of its initial level while the final specific surface area is 82%. Identification and quantification of inert biomass will give a better understanding of biomass accumulation in biofilters and will result in a more exact simulation of biomass change during long-term operations. Results also indicate that an ideal biomass control technique should be able to remove most inert biomass while simultaneously preserving as much active biomass as possible. PMID:22693411
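
    A sketch of the modelling distinction the study tests: active biomass grows (here with a simple saturation term) and decays, and a fraction of the decayed biomass is retained as inert biomass that accumulates over time; the rate constants are illustrative, not fitted values from the biofilter experiments.

    ```python
    # Active biomass grows toward a capacity and decays; a fraction f_inert of the
    # decayed biomass becomes inert, so the active fraction of total biomass
    # declines during long-term operation.
    from scipy.integrate import solve_ivp

    mu, b, f_inert, x_max = 0.15, 0.05, 0.2, 10.0   # 1/day rates; capacity in arbitrary units

    def rhs(t, y):
        x_active, x_inert = y
        growth = mu * x_active * (1.0 - x_active / x_max)
        decay = b * x_active
        return [growth - decay, f_inert * decay]

    sol = solve_ivp(rhs, (0, 400), [0.5, 0.0], t_eval=[0, 100, 200, 300, 400])
    for t, xa, xi in zip(sol.t, sol.y[0], sol.y[1]):
        frac = xa / (xa + xi) if (xa + xi) > 0 else 1.0
        print(f"day {t:5.0f}: active {xa:5.2f}, inert {xi:5.2f}, active fraction {frac:.2f}")
    ```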

  18. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10^-17 m^2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.
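
    A sketch of the standard linear-inverse step that such studies pair with FEM-generated Green's functions: displacements d are modelled as d = Gm and the slip distribution m is recovered by damped least squares. The matrix G, noise level, and "true" slip below are random placeholders standing in for FEM output and GPS data.

    ```python
    # Damped (Tikhonov-regularised) least-squares slip inversion for d = G m.
    import numpy as np

    rng = np.random.default_rng(6)
    n_obs, n_patches = 60, 20
    G = rng.normal(size=(n_obs, n_patches))                       # synthetic Green's functions
    m_true = np.clip(rng.normal(1.0, 0.5, n_patches), 0, None)    # "true" slip (m)
    d = G @ m_true + rng.normal(scale=0.05, size=n_obs)           # noisy displacements

    damping = 0.1                                    # regularisation weight
    G_aug = np.vstack([G, damping * np.eye(n_patches)])
    d_aug = np.concatenate([d, np.zeros(n_patches)])
    m_hat = np.linalg.lstsq(G_aug, d_aug, rcond=None)[0]

    print(f"max |slip error| = {np.abs(m_hat - m_true).max():.3f} m")
    ```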

  19. The Communication Model Perspective of Oral Interpretation.

    ERIC Educational Resources Information Center

    Peterson, Eric E.

    Communication models suggest that oral interpretation is a communicative process, that this process may be represented by specification of implicit and explicit content and structure, and that the models themselves are useful. This paper examines these assumptions through a comparative analysis of communication models employed by oral…

  20. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  1. Assumptions, conjectures, and other miracles: The application of evaluative thinking to theory of change models in community development.

    PubMed

    Archibald, Thomas; Sharrock, Guy; Buckley, Jane; Cook, Natalie

    2016-12-01

    Unexamined and unjustified assumptions are the Achilles' heel of development programs. In this paper, we describe an evaluation capacity building (ECB) approach designed to help community development practitioners work more effectively with assumptions through the intentional infusion of evaluative thinking (ET) into the program planning, monitoring, and evaluation process. We focus specifically on one component of our ET promotion approach involving the creation and analysis of theory of change (ToC) models. We describe our recent efforts to pilot this ET ECB approach with Catholic Relief Services (CRS) in Ethiopia and Zambia. The use of ToC models, plus the addition of ET, is a way to encourage individual and organizational learning and adaptive management that supports more reflective and responsive programming. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Sensitivity analysis of pars-tensa young's modulus estimation using inverse finite-element modeling

    NASA Astrophysics Data System (ADS)

    Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.

    2018-05-01

    Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique by optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing some modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, hence affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the modeling assumption that is most influential on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. The EPT was estimated using the golden-section optimization method, which minimizes the cost function comparing the deformed FE model shape to the measured shape after pressurization. The effects of varying the modeling assumptions on EPT estimates were investigated. These included changes in PT thickness, pars flaccida Young's modulus, and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness, and the least influential parameter was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
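    The sketch below illustrates the kind of golden-section search used in the estimation step described above. The cost function is a stand-in (in the real workflow it would re-run the sample-specific FE model and compare the predicted shape against the FTP measurement), and the search bounds, tolerance, and the 2.3 MPa "optimum" are hypothetical.

```python
"""Minimal sketch of a golden-section search over candidate Young's modulus values."""
import math

def shape_mismatch(E_pt):
    # Placeholder cost: in practice this would (1) set the pars-tensa modulus to E_pt,
    # (2) solve the pressurized FE model, and (3) return a shape-difference metric
    # against the FTP-measured deformed shape.
    return (E_pt - 2.3e6) ** 2  # hypothetical optimum at 2.3 MPa

def golden_section(f, lo, hi, tol=1e3):
    """Locate the minimizer of a unimodal function f on the interval [lo, hi]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

E_est = golden_section(shape_mismatch, 0.5e6, 10e6)
print(f"Estimated pars-tensa Young's modulus: {E_est:.3e} Pa")
```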

  3. State relations for a two-phase mixture of reacting explosives and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubota, Shiro; Saburi, Tei; Ogata, Yuji

    2007-10-15

    To assess the assumptions behind the two-phase mixture rule for reacting explosives, the shock-to-detonation transition process was calculated for high explosives using a finite difference method. An ignition and growth model and the Jones-Wilkins-Lee (JWL) equations of state were employed. The simple mixture rule assumes that the reacting explosive is a simple mixture of the reactant and product components. Four different assumptions, such as that of thermal equilibrium and isotropy, were adopted to calculate the pressure. The main purpose of this paper is to present the answer to the question of why the numerical results of shock-initiation are insensitive to the assumptions adopted. The equations of state for reactants and products were assessed by considering plots of the specific internal energy E and specific volume V. If the slopes of the constant-pressure lines for both components in the E-V plane are almost the same, it is demonstrated that the numerical results are insensitive to the assumptions adopted. We have found that the relation for the specific volumes of the two components can be approximately expressed by a single curve of the specific volume of the reactant vs that of the products. We discuss this relationship in terms of the results of the numerical simulation. (author)
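    The E-V plane argument above can be illustrated with a short script: for a JWL equation of state, solve for the specific internal energy along a line of constant pressure and compare the slopes dE/dV of the reactant-like and product-like components. The JWL parameter sets below are rough, hypothetical placeholders, not the calibrations used in the paper.

```python
"""Minimal sketch: constant-pressure lines in the E-V plane for JWL equations of state."""
import numpy as np

def jwl_energy_at_pressure(P, V, A, B, R1, R2, omega):
    """Invert P = A(1 - w/(R1 V))e^{-R1 V} + B(1 - w/(R2 V))e^{-R2 V} + w E / V
    for the internal energy E at a prescribed pressure P (V is relative specific volume)."""
    cold = (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V))
    return V * (P - cold) / omega

V = np.linspace(0.4, 0.9, 200)        # relative specific volume
P = 10e9                              # a 10 GPa iso-pressure line
# Hypothetical reactant-like and product-like JWL parameter sets (pressures in Pa)
reactant = dict(A=600e9, B=-5e9, R1=11.0, R2=1.3, omega=0.9)
product = dict(A=370e9, B=3e9, R1=4.2, R2=1.0, omega=0.3)

E_r = jwl_energy_at_pressure(P, V, **reactant)
E_p = jwl_energy_at_pressure(P, V, **product)
# Compare average slopes dE/dV of the two constant-pressure lines
print(np.gradient(E_r, V).mean(), np.gradient(E_p, V).mean())
```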

  4. Estimating the Global Prevalence of Inadequate Zinc Intake from National Food Balance Sheets: Effects of Methodological Assumptions

    PubMed Central

    Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.

    2012-01-01

    Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12% and 66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprising zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
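    A minimal sketch of the estimation chain is given below, under simplified assumptions: per-capita zinc and phytate supplies feed a saturable-absorption model of the Miller-equation type, and prevalence is approximated with a normal-distribution cut-point comparison against a physiological requirement. All parameter values (absorption constants, CV, requirement) are illustrative placeholders rather than the study's values.

```python
"""Minimal sketch: absorbable zinc from food-balance data and a prevalence estimate."""
import math

def absorbed_zinc(tdz_mmol, tdp_mmol, amax=0.091, kr=0.033, kp=0.68):
    """Miller-type saturable absorption: total absorbed zinc (mmol/d) from total
    dietary zinc (TDZ) and phytate (TDP), both in mmol/d. Constants are illustrative."""
    b = amax + tdz_mmol + kr * (1 + tdp_mmol / kp)
    return 0.5 * (b - math.sqrt(b * b - 4 * amax * tdz_mmol))

def prevalence_inadequate(mean_absorbed, requirement, cv=0.25):
    """Cut-point-style estimate: fraction of a normally distributed population whose
    absorbed zinc falls below the mean physiological requirement, given an assumed CV."""
    sd = cv * mean_absorbed
    z = (requirement - mean_absorbed) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical national per-capita supply
tdz = 10.0 / 65.38     # 10 mg/d zinc converted to mmol/d
tdp = 2000.0 / 660.0   # 2000 mg/d phytate converted to mmol/d (approximate molar mass)
abs_zn = absorbed_zinc(tdz, tdp)
print(prevalence_inadequate(abs_zn, requirement=0.04))  # requirement in mmol/d, illustrative
```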

  5. Using foresight methods to anticipate future threats: the case of disease management.

    PubMed

    Ma, Sai; Seid, Michael

    2006-01-01

    We describe a unique foresight framework for health care managers to use in longer-term planning. This framework uses scenario-building to envision plausible alternate futures of the U.S. health care system and links those broad futures to business-model-specific "load-bearing" assumptions. Because the framework we describe simultaneously addresses very broad and very specific issues, it can be easily applied to a broad range of health care issues by using the broad framework and business-specific assumptions for the particular case at hand. We illustrate this method using the case of disease management, pointing out that although the industry continues to grow rapidly, its future also contains great uncertainties.

  6. The sensitivity of the ESA DELTA model

    NASA Astrophysics Data System (ADS)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  7. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    NASA Astrophysics Data System (ADS)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification corresponds to a Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as a posterior analysis of the positive sign of the tested relationship.
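    To make the model class concrete, the sketch below writes down a GARCH(1,1)-in-Mean specification with a Gaussian conditional density and fits it by maximum likelihood on simulated returns. The Gaussian density and the maximum-likelihood fit are deliberate simplifications of the paper's skewed, Bayesian treatment; the starting values, bounds, and simulated data are assumptions for illustration only.

```python
"""Minimal sketch of a Gaussian GARCH(1,1)-in-Mean likelihood."""
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r):
    mu, lam, omega, alpha, beta = params
    n = len(r)
    sigma2 = np.empty(n)
    eps = np.empty(n)
    sigma2[0] = np.var(r)                      # simple initialization of the variance recursion
    ll = 0.0
    for t in range(n):
        if t > 0:
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        mean_t = mu + lam * np.sqrt(sigma2[t])  # "in-Mean" term: return depends on conditional risk
        eps[t] = r[t] - mean_t
        ll += -0.5 * (np.log(2 * np.pi * sigma2[t]) + eps[t] ** 2 / sigma2[t])
    return -ll

# Fit to simulated excess returns (a stand-in for the WSE index data)
rng = np.random.default_rng(1)
r = 0.0005 + 0.01 * rng.standard_normal(1500)
x0 = [0.0, 0.05, 1e-5, 0.05, 0.9]
bounds = [(-0.01, 0.01), (-2, 2), (1e-8, 1e-3), (0, 0.3), (0.5, 0.999)]
fit = minimize(neg_loglik, x0, args=(r,), bounds=bounds, method="L-BFGS-B")
print(fit.x)
```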

  8. Bayesian learning and the psychology of rule induction

    PubMed Central

    Endress, Ansgar D.

    2014-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791

  9. A Motivation Contract Model of Employee Appraisal.

    ERIC Educational Resources Information Center

    Glenn, Robert B.

    The purpose of this paper is to develop a process model for identification and assessment of employee job performance, through motivation contracting. The model integrated various components of expectancy theories of motivation and performance contracting and is based on humanistic assumptions about the nature of people. More specifically, the…

  10. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    PubMed

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Developing a data governance model in health care.

    PubMed

    Reeves, Mary G; Bowen, Rita

    2013-02-01

    When building a data governance model, finance leaders should: establish a leadership team and define the program's scope; calculate the return using the "confidence in data-dependent assumptions" metric; and identify specific areas of deficiency and create a budget to address them.

  12. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.

    PubMed

    Ganusov, Vitaly V

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why rejected models failed to explain the data; and (4) to suggest experiments that would discriminate between the remaining alternative models. The use of strong inference is likely to improve the robustness of predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the twenty-first century.

  14. Particle precipitation: How the spectrum fit impacts atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wissing, J. M.; Nieder, H.; Yakovchouk, O. S.; Sinnhuber, M.

    2016-11-01

    Particle precipitation causes atmospheric ionization. Modeled ionization rates are widely used in atmospheric chemistry/climate simulations of the upper atmosphere. As ionization rates are based on particle measurements, some assumptions concerning the energy spectrum are required. While detectors measure particles binned into certain energy ranges only, the calculation of an ionization profile needs a fit for the whole energy spectrum. Therefore, the following assumptions are needed: (a) the fit function (e.g. power-law or Maxwellian), (b) the energy range, (c) the number of segments in the spectral fit, and (d) fixed or variable positions of the intersections between these segments. The aim of this paper is to quantify the impact of different assumptions on ionization rates as well as their consequences for atmospheric chemistry modeling. As the assumptions about the particle spectrum are independent of the ionization model itself, the results of this paper are not restricted to a single ionization model, even though the Atmospheric Ionization Module OSnabrück (AIMOS, Wissing and Kallenrode, 2009) is used here. We include protons only, as this allows us to trace changes in the chemistry model directly back to the different assumptions without the need to interpret superposed ionization profiles. However, since every particle species requires a particle spectrum fit with the mentioned assumptions, the results are generally applicable to all precipitating particles. The reader may argue that the selection of assumptions for the particle fit is of minor interest, but we would like to emphasize this topic as it is a major, if not the main, source of discrepancies between different ionization models (and reality). Depending on the assumptions, single ionization profiles may vary by a factor of 5, long-term calculations may show systematic over- or underestimation at specific altitudes, and even for ideal setups the definition of the energy range involves an intrinsic 25% uncertainty in the ionization rates. The effects on atmospheric chemistry (HOx, NOx, and ozone) have been calculated with 3dCTM, showing that the spectrum fit is responsible for an 8% variation in ozone between setups, and even up to 50% for extreme setups.
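    The sketch below illustrates why the spectrum-fit choices matter: a single power-law segment is fitted to a handful of channel-averaged fluxes, and the fitted spectrum is then extrapolated to energies outside the measured range. The channel energies and fluxes are invented numbers, not detector data, and a single power-law segment is only one of the fit options discussed above.

```python
"""Minimal sketch of a one-segment power-law fit to binned proton fluxes."""
import numpy as np

def fit_power_law(E, j):
    """Fit j(E) = j0 * E**(-gamma) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(E), np.log(j), 1)
    return np.exp(intercept), -slope   # (j0, gamma)

# Hypothetical channel mean energies (MeV) and differential fluxes
E = np.array([1.2, 4.5, 13.0, 36.0, 92.0])
j = np.array([3.1e4, 2.0e3, 1.4e2, 9.0, 0.7])

j0, gamma = fit_power_law(E, j)
# The assumed energy range matters: extrapolating the same fit to 0.1 MeV or to
# 500 MeV gives very different fluxes, and hence different ionization profiles.
for E_ext in (0.1, 500.0):
    print(f"E = {E_ext:6.1f} MeV -> j = {j0 * E_ext ** (-gamma):.3e}")
```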

  15. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    ERIC Educational Resources Information Center

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  16. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    PubMed

    Moore, Julia L; Remais, Justin V

    2014-03-01

    Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when linear versus non-linear developmental functions were used to model emergence in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
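    A minimal degree-day calculation using the daily average method with lower and upper thresholds is sketched below; the horizontal-cutoff handling of the upper threshold is one common variant, and the thresholds, temperature record, and degree-day requirement are illustrative.

```python
"""Minimal sketch of degree-day accumulation with the daily average method."""
def daily_degree_days(t_min, t_max, lower=10.0, upper=30.0):
    """Daily average method: clamp the mean temperature to the thresholds, then
    subtract the lower threshold (no development accrues outside the thresholds)."""
    t_mean = 0.5 * (t_min + t_max)
    t_eff = min(max(t_mean, lower), upper)
    return t_eff - lower

def emergence_day(daily_min_max, required_dd, **kwargs):
    """Return the day on which accumulated degree-days first reach the requirement."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_min_max, start=1):
        total += daily_degree_days(t_min, t_max, **kwargs)
        if total >= required_dd:
            return day
    return None  # requirement not met within the record

# Hypothetical 10-day record of (min, max) temperatures in degrees C
record = [(8, 18), (9, 21), (11, 24), (12, 26), (10, 22),
          (13, 27), (14, 29), (12, 25), (11, 23), (13, 28)]
print(emergence_day(record, required_dd=60))
```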

  17. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.

    PubMed

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D

    2016-10-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.

  18. Plant uptake of elements in soil and pore water: field observations versus model assumptions.

    PubMed

    Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo

    2013-09-15

    Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have been previously shown to not generally be valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to which extent observed element-specific uptake is consistent with TF model assumptions and to which extent TF's can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
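    The TF model check described above reduces to a simple computation, sketched below with made-up numbers: element-specific transfer factors are ratios of plant to soil (or pore-water) concentrations, and the linearity assumption can be screened by correlating plant and soil concentrations across sites.

```python
"""Minimal sketch of soil-to-plant transfer factors and a linearity screen."""
import numpy as np

def transfer_factor(c_plant, c_soil):
    """Element-specific TF per site; the TF model assumes this ratio is constant."""
    return np.asarray(c_plant) / np.asarray(c_soil)

def linearity_check(c_plant, c_soil):
    """Pearson correlation between plant and soil concentrations across sites;
    a low r suggests the linear TF assumption is violated for this element."""
    return np.corrcoef(c_soil, c_plant)[0, 1]

# Hypothetical concentrations of one element at four sites (mg/kg)
c_soil = np.array([12.0, 25.0, 40.0, 55.0])
c_plant_ear = np.array([0.8, 0.9, 1.1, 1.0])     # ear: weakly related to soil
c_plant_whole = np.array([0.9, 1.8, 2.9, 4.1])   # whole plant: nearly linear

print("TF (whole plant):", transfer_factor(c_plant_whole, c_soil))
print("r (ear vs soil):  ", linearity_check(c_plant_ear, c_soil))
print("r (whole vs soil):", linearity_check(c_plant_whole, c_soil))
```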

  19. Modeling approaches in avian conservation and the role of field biologists

    USGS Publications Warehouse

    Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.

    2006-01-01

    This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.

  20. Taxometric Analyses of Specific Language Impairment in 3- And 4-Year-Old Children.

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.

    2004-01-01

    Specific language impairment (SLI), like many diagnostic labels for complex behavioral conditions, is often assumed to define a category of children who differ not only in degree but also in kind from children developing language normally. Although this assumption has important implications for theoretical models and clinical approaches, its…

  1. From behavioural analyses to models of collective motion in fish schools

    PubMed Central

    Lopez, Ugo; Gautrais, Jacques; Couzin, Iain D.; Theraulaz, Guy

    2012-01-01

    Fish schooling is a phenomenon of long-lasting interest in ethology and ecology, widely spread across taxa and ecological contexts, and has attracted much interest from statistical physics and theoretical biology as a case of self-organized behaviour. One topic of intense interest is the search for the specific behavioural mechanisms at play at the individual level from which school properties emerge. This is fundamental for understanding how selective pressure acting at the individual level promotes adaptive properties of schools, and for disambiguating functional properties from non-adaptive epiphenomena. Decades of studies on collective motion by means of individual-based modelling have allowed a qualitative understanding of the self-organization processes leading to collective properties at the school level, and have provided insight into the behavioural mechanisms that result in coordinated motion. Here, we emphasize a set of paradigmatic modelling assumptions whose validity remains unclear, both from a behavioural point of view and in terms of quantitative agreement between model outcome and empirical data. We advocate a specific and biologically oriented re-examination of these assumptions through experiment-based behavioural analysis and modelling. PMID:24312723

  2. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  3. Variable thickness transient ground-water flow model. Volume 1. Formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented.

  4. A Skew-Normal Mixture Regression Model

    ERIC Educational Resources Information Center

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  5. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    ERIC Educational Resources Information Center

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  6. Final CSAPR Revisions Rule (77 FR 10324)

    EPA Pesticide Factsheets

    EPA finalizes revisions to the Transport Rule (76 FR 48208). These revisions address discrepancies in unit-specific modeling assumptions that affect the proper calculation of Transport Rule state budgets and assurance levels in several states.

  7. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances for class enumeration (finding the true number of latent classes) and for parameter estimates, under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly affected estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
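    The modeling choice examined above can be made concrete with a bare-bones EM algorithm for a two-class regression mixture, fitted either with class-specific residual variances or with the equality constraint applied in the M-step. This is an illustrative sketch, not the study's simulation code; the data-generating values and the initialization are assumptions.

```python
"""Minimal sketch of EM for a regression mixture, with optional equal residual variances."""
import numpy as np

def em_regression_mixture(x, y, K=2, equal_var=True, n_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    resp = rng.dirichlet(np.ones(K), size=n)       # random initial responsibilities
    betas, sig2, pis = np.zeros((K, 2)), np.ones(K), np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # M-step: weighted least squares per class, then (possibly constrained) variances
        for k in range(K):
            w = resp[:, k]
            betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            res = y - X @ betas[k]
            sig2[k] = np.sum(w * res ** 2) / np.sum(w)
            pis[k] = w.mean()
        if equal_var:
            # Equality constraint: pool the residual variance across latent classes
            pooled = sum(np.sum(resp[:, k] * (y - X @ betas[k]) ** 2) for k in range(K))
            sig2[:] = pooled / n
        # E-step: update responsibilities under the current parameters
        dens = np.column_stack([
            pis[k] * np.exp(-0.5 * (y - X @ betas[k]) ** 2 / sig2[k]) / np.sqrt(2 * np.pi * sig2[k])
            for k in range(K)])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, sig2, pis

# Simulated heterogeneous effects: two classes with different slopes and residual variances
rng = np.random.default_rng(1)
x = rng.normal(size=400)
z = rng.random(400) < 0.5
y = np.where(z, 1.0 + 2.0 * x + rng.normal(0, 0.5, 400),
                -1.0 + 0.3 * x + rng.normal(0, 1.5, 400))
print(em_regression_mixture(x, y, equal_var=True)[1])   # constrained variances
print(em_regression_mixture(x, y, equal_var=False)[1])  # class-specific variances
```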

  8. Institutional Approaches to Innovation and Change: A Review of the Esman Model of Institution Building.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    The definitional and conceptual structure of the Esman model of institution building is described in great detail, emphasizing its philosophic and process assumptions and its latent dynamics. The author systematically critiques the Esman model in terms of its (1) specificity to the universe of institution building, (2) generalizability across…

  9. ASP-G: an ASP-based method for finding attractors in genetic regulatory networks

    PubMed Central

    Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine

    2014-01-01

    Motivation: Boolean network models are suitable for simulating GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions about how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how simulating network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems, but it still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
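    As a minimal illustration of the kind of simulation ASP-G makes configurable, the sketch below exhaustively enumerates the attractors of a toy three-gene Boolean network under a synchronous update scheme. The network, its interaction rules, and the update scheme are invented for illustration and are unrelated to the paper's case studies.

```python
"""Minimal sketch: attractors of a toy Boolean network under synchronous update."""
from itertools import product

# Toy interaction rules: each gene's next state as a function of the current state
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # A and B jointly activate C
}

def step(state):
    """Synchronous update: all genes evaluate their rule on the same snapshot."""
    return {g: bool(rule(state)) for g, rule in rules.items()}

def attractors():
    """Follow every initial state until a state repeats; collect the resulting cycles."""
    found = set()
    for bits in product([False, True], repeat=len(rules)):
        state = dict(zip(rules, bits))
        seen = []
        while state not in seen:
            seen.append(state)
            state = step(state)
        cycle = seen[seen.index(state):]   # the attractor (fixed point or limit cycle)
        found.add(tuple(sorted(tuple(sorted(s.items())) for s in cycle)))
    return found

for att in attractors():
    print(att)
```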

  10. Hot spots in the microwave sky

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Juszkiewicz, Roman

    1987-01-01

    The assumption that the cosmic background fluctuations can be approximated as a random Gaussian field implies specific predictions for the radiation temperature pattern. Using this assumption, the abundances and angular sizes are calculated for regions of various levels of brightness expected to appear in the sky. Different observational strategies are assessed in the context of these results. Calculations for both large-angle and small-angle anisotropy generated by scale-invariant fluctuations in a flat universe are presented. Also discussed are simple generalizations to open cosmological models.

  11. Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges.

    PubMed

    Chatterji, Madhabi

    2016-12-01

    This paper explores avenues for navigating the evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven major sources of complexity that typify CSPs and threaten the assumptions of textbook-recommended experimental designs for impact evaluations. Theoretically supported alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include sequential refinement of the evaluation design through systems thinking and systems-informed logic modeling, and the use of extended-term, mixed-methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question framing, and the selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.

    PubMed

    Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C

    2015-01-01

    Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator and no unmeasured confounders that also serve as a moderator of the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.

  13. Group Facilitation: Functions and Skills.

    ERIC Educational Resources Information Center

    Anderson, L. Frances; Robertson, Sharon E.

    1985-01-01

    Discusses a model based on a specific set of assumptions about causality and effectiveness in interactional groups. Discusses personal qualities of group facilitators and proposes five major functions and seven skill clusters central to effective group facilitation. (Author/BH)

  14. The friable sponge model of a cometary nucleus

    NASA Technical Reports Server (NTRS)

    Horanyi, M.; Gombosi, T. I.; Korosmezey, A.; Kecskemety, K.; Szego, K.; Cravens, T. E.; Nagy, A. F.

    1984-01-01

    The mantle/core model of cometary nuclei, first suggested by Whipple and subsequently developed by Mendis and Brin, is modified and extended. New terms are added to the heat conduction equation for the mantle, which is solved in order to obtain the temperature distribution in the mantle and the gas production rate as a function of mantle thickness and heliocentric distance. These results are then combined with some specific assumptions about the mantle structure (the friable sponge model) in order to make predictions for the variation of gas production rate and mantle thickness as functions of heliocentric distance for different comets. A solution of the time-dependent heat conduction equation is presented in order to check some of the assumptions.

  15. Dynamic Network-Based Epistasis Analysis: Boolean Examples

    PubMed Central

    Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.

    2011-01-01

    In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets are becoming available, the analysis of classical epistasis is being complemented with computer science tools and system biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, the access to relevant information and the correct inference of gene interaction topologies is hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks that, like classical epistasis analysis, rely on logical formalisms and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and to discern among alternative regulatory schemes that would be impossible or difficult to infer without eliminating these assumptions from classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis. Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, and specifically introducing the usefulness of Boolean network models and also reviewing some key properties of network approaches. PMID:22645556

  16. A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials

    PubMed Central

    Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro

    2016-01-01

    Summary Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O’Quigley et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
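    For context, the sketch below implements the basic one-parameter continual reassessment method that the hierarchical design generalizes: a power model with a normal prior, a discretized posterior over the model parameter, and selection of the dose whose posterior mean toxicity is closest to the target. The skeleton, prior variance, target, and example data are illustrative assumptions; the subgroup-specific hierarchical extension is not shown.

```python
"""Minimal sketch of a one-parameter CRM dose recommendation."""
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior guesses of dose toxicities
target = 0.25                                         # target toxicity probability
prior_sd = np.sqrt(1.34)                              # a commonly used CRM prior variance

def posterior_dose(doses_given, tox_observed):
    """Return the recommended dose index given the (dose index, 0/1 toxicity) data so far."""
    theta = np.linspace(-4, 4, 2001)                  # grid for the model parameter
    w = np.exp(-0.5 * (theta / prior_sd) ** 2)        # unnormalized normal prior weights
    for d, y in zip(doses_given, tox_observed):
        p = skeleton[d] ** np.exp(theta)              # power model: p_i = skeleton_i ** exp(theta)
        w *= p ** y * (1 - p) ** (1 - y)              # Bernoulli likelihood contribution
    w /= w.sum()                                      # discrete posterior over the theta grid
    # Posterior mean toxicity at each dose; pick the dose closest to the target
    p_hat = np.array([(skeleton[i] ** np.exp(theta) * w).sum() for i in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target)))

# Example: 6 patients treated so far (dose indices and 0/1 toxicity outcomes)
print(posterior_dose([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 0, 1]))
```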

  17. Formal specification and verification of a fault-masking and transient-recovery model for digital flight-control systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1991-01-01

    The formal specification and mechanically checked verification for a model of fault-masking and transient-recovery among the replicated computers of digital flight-control systems are presented. The verification establishes, subject to certain carefully stated assumptions, that faults among the component computers are masked so that commands sent to the actuators are the same as those that would be sent by a single computer that suffers no failures.

  18. Bayesian Estimation of Panel Data Fractional Response Models with Endogeneity: An Application to Standardized Test Rates

    ERIC Educational Resources Information Center

    Kessler, Lawrence M.

    2013-01-01

    In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…

  19. Genetic and Environmental Influences of General Cognitive Ability: Is g a valid latent construct?

    PubMed Central

    Panizzon, Matthew S.; Vuoksimaa, Eero; Spoon, Kelly M.; Jacobson, Kristen C.; Lyons, Michael J.; Franz, Carol E.; Xian, Hong; Vasilopoulos, Terrie; Kremen, William S.

    2014-01-01

    Despite an extensive literature, the “g” construct remains a point of debate. Different models explaining the observed relationships among cognitive tests make distinct assumptions about the role of g in relation to those tests and specific cognitive domains. Surprisingly, these different models and their corresponding assumptions are rarely tested against one another. In addition to the comparison of distinct models, a multivariate application of the twin design offers a unique opportunity to test whether there is support for g as a latent construct with its own genetic and environmental influences, or whether the relationships among cognitive tests are instead driven by independent genetic and environmental factors. Here we tested multiple distinct models of the relationships among cognitive tests utilizing data from the Vietnam Era Twin Study of Aging (VETSA), a study of middle-aged male twins. Results indicated that a hierarchical (higher-order) model with a latent g phenotype, as well as specific cognitive domains, was best supported by the data. The latent g factor was highly heritable (86%), and accounted for most, but not all, of the genetic effects in specific cognitive domains and elementary cognitive tests. By directly testing multiple competing models of the relationships among cognitive tests in a genetically-informative design, we are able to provide stronger support than in prior studies for g being a valid latent construct. PMID:24791031

  20. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and the system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model given its system architecture and data, when little or no assumption has been made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to parameterization forms that are different from, or possibly contradictory to, what would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to what the data can support.

  1. Mixed infections reveal virulence differences between host-specific bee pathogens.

    PubMed

    Klinger, Ellen G; Vojvodic, Svjetlana; DeGrandi-Hoffman, Gloria; Welker, Dennis L; James, Rosalind R

    2015-07-01

    Dynamics of host-pathogen interactions are complex, often influencing the ecology, evolution and behavior of both the host and pathogen. In the natural world, infections with multiple pathogens are common, yet due to their complexity, interactions can be difficult to predict and study. Mathematical models help facilitate our understanding of these evolutionary processes, but empirical data are needed to test model assumptions and predictions. We used two common theoretical models regarding mixed infections (superinfection and co-infection) to determine which model assumptions best described a group of fungal pathogens closely associated with bees. We tested three fungal species, Ascosphaera apis, Ascosphaera aggregata and Ascosphaera larvis, in two bee hosts (Apis mellifera and Megachile rotundata). Bee survival was not significantly different in mixed infections vs. solo infections with the most virulent pathogen for either host, but fungal growth within the host was significantly altered by mixed infections. In the host A. mellifera, only the most virulent pathogen was present in the host post-infection (indicating superinfective properties). In M. rotundata, the most virulent pathogen co-existed with the lesser-virulent one (indicating co-infective properties). We demonstrated that the competitive outcomes of mixed infections were host-specific, indicating strong host specificity among these fungal bee pathogens. Published by Elsevier Inc.

  2. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and the vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations of patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets; we then stepwise remove these simplifications until the most comprehensive FSI simulations are reached. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  3. Stimulus-specific variability in color working memory with delayed estimation.

    PubMed

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I

    2014-04-08

    Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether or not this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Posthoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.

  4. Common Cause Failure Modeling: Aerospace Versus Nuclear

    NASA Technical Reports Server (NTRS)

    Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer

    2010-01-01

    Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from whence this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.

  5. Impact of Different Levels of Epistemic Beliefs on Learning Processes and Outcomes in Vocational Education and Training

    ERIC Educational Resources Information Center

    Berding, Florian; Rolf-Wittlake, Katharina; Buschenlange, Janes

    2017-01-01

    Epistemic beliefs are individuals' beliefs about knowledge and knowing. Modelling them is currently based on two central assumptions. First, epistemic beliefs are conceptualized as a multi-level construct, i.e. they exist on a general, academic, domain-specific and/or topic-specific level. Second, research assumes that their more concrete levels…

  6. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
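
    In a similar spirit (though not necessarily the authors' exact formulation), a common way to separate model error from numerical and experimental error sources is the comparison-error bookkeeping used in validation standards such as ASME V&V 20: the comparison error E = S − D is attributed to modeling assumptions only up to a combined validation uncertainty. The numbers below are invented for illustration.

    ```python
    # Hedged sketch of validation-style error bookkeeping (illustrative numbers,
    # not values from the study). E = S - D is the comparison error; the
    # numerical, input and experimental uncertainties are combined in quadrature
    # to bound how much of E can be attributed to the modeling assumptions.
    import math

    S = 0.52         # simulated velocity at a validation point (m/s), hypothetical
    D = 0.49         # PIV-measured velocity at the same point (m/s), hypothetical
    u_num = 0.01     # numerical (discretization/iteration) uncertainty, hypothetical
    u_input = 0.015  # uncertainty from inputs (geometry, flow rate), hypothetical
    u_D = 0.02       # experimental measurement uncertainty, hypothetical

    E = S - D
    u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)

    print(f"comparison error E = {E:+.3f} m/s")
    print(f"validation uncertainty u_val = {u_val:.3f} m/s")
    print(f"model error delta_model lies in [{E - u_val:+.3f}, {E + u_val:+.3f}] m/s")
    ```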

  7. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication, it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators with a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcome of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.

  8. Reaction rates for mesoscopic reaction-diffusion kinetics

    DOE PAGES

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2015-02-23

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. Finally, we show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results.

  9. Reaction rates for mesoscopic reaction-diffusion kinetics

    PubMed Central

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2016-01-01

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. We show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results. PMID:25768640

  10. Of mental models, assumptions and heuristics: The case of acids and acid strength

    NASA Astrophysics Data System (ADS)

    McClary, Lakeisha Michelle

    This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data was analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another model. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.

  11. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
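
    A compact way to run the kind of distributional comparison described above is to fit competing count models to the same design matrix and compare AIC values. The sketch below uses simulated overdispersed, zero-heavy data (not the mayfly dataset) and statsmodels; model names and simulation settings are illustrative.

    ```python
    # Sketch: compare count-data distributional assumptions by AIC on simulated,
    # overdispersed, zero-heavy data (not the mayfly data from the study).
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
    from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                  ZeroInflatedNegativeBinomialP)

    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    X = sm.add_constant(x)

    # Simulate a zero-inflated, gamma-mixed (overdispersed) count process.
    mu = np.exp(0.5 + 0.6 * x)
    lam = rng.gamma(shape=1.0, scale=mu)
    y = rng.poisson(lam)
    y[rng.random(n) < 0.3] = 0                    # extra structural zeroes

    fits = {
        "Poisson": Poisson(y, X).fit(disp=0),
        "NegBin": NegativeBinomial(y, X).fit(disp=0),
        "ZIP": ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0),
        "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0),
    }
    for name, res in fits.items():
        print(f"{name:7s} AIC = {res.aic:8.1f}")
    ```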

  12. Identifying fMRI Model Violations with Lagrange Multiplier Tests

    PubMed Central

    Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor

    2013-01-01

    The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
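
    The core of a Lagrange Multiplier specification test is inexpensive: fit the restricted model, regress its residuals on the extra regressors implied by the alternative, and refer n·R² from that auxiliary regression to a chi-square distribution. The sketch below is a generic single-series illustration of that recipe (here testing for an omitted nonlinear term), not the authors' voxelwise implementation.

    ```python
    # Generic LM specification test sketch: does the restricted linear model miss
    # a quadratic (non-linear) term? Under the null of correct specification,
    # n * R^2 from the auxiliary regression is chi-square distributed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(scale=0.5, size=n)  # truth is nonlinear

    # Restricted model: y ~ 1 + x
    X0 = np.column_stack([np.ones(n), x])
    beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    resid = y - X0 @ beta0

    # Auxiliary regression: residuals ~ restricted regressors + omitted term (x^2)
    X1 = np.column_stack([X0, x**2])
    gamma, *_ = np.linalg.lstsq(X1, resid, rcond=None)
    fitted = X1 @ gamma
    r2 = 1.0 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)

    lm_stat = n * r2
    df = 1  # one omitted regressor under the alternative
    p_value = stats.chi2.sf(lm_stat, df)
    print(f"LM = {lm_stat:.1f}, p = {p_value:.3g}")
    ```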

  13. THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchall, A.; Vostrotin, V.; Puncher, M.

    The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consists mainly of activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models, and unlike its predecessors, takes explicit account of uncertainties in both the measurement data and model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties) and themore » methodology used (including all the relevant equations) and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.« less

  14. TEACHER-ADVISORS: Where There's a Skill There's A Way.

    ERIC Educational Resources Information Center

    Tamminen, Armas; And Others

    This report discusses a program to present the Teacher Advisement Training Model. This model for training teacher-advisors is based on the assumption that tentative commitment to making school a more rewarding experience for all is the first step in starting an effective program. The approach is to help teachers learn specific skills and methods…

  15. On the Kubo-Greenwood model for electron conductivity

    NASA Astrophysics Data System (ADS)

    Dufty, James; Wrighton, Jeffrey; Luo, Kai; Trickey, S. B.

    2018-02-01

    Currently, the most common method to calculate transport properties for materials under extreme conditions is based on the phenomenological Kubo-Greenwood method. The results of an inquiry into the justification and context of that model are summarized here. Specifically, the basis for its connection to equilibrium DFT and the assumption of static ions are discussed briefly.

  16. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-12-31

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these {open_quotes}most likely{close_quotes} motion vectors.more » To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.« less

  17. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design

    PubMed Central

    Valente, Matthew J.; MacKinnon, David P.

    2017-01-01

    Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example. PMID:28845097

  18. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design.

    PubMed

    Valente, Matthew J; MacKinnon, David P

    2017-01-01

    Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example.
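
    For readers wanting a concrete baseline, the ANCOVA form of the mediated effect in this design can be sketched with two ordinary regressions and the product of coefficients. The article fits these models in a Latent Change Score specification, so the code below (simulated data, invented effect sizes) is only a simplified regression analogue of that approach.

    ```python
    # Simplified regression analogue of the ANCOVA mediation model for the
    # pretest-posttest control group design (simulated data; the article uses a
    # Latent Change Score specification rather than plain OLS).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 400
    tx = rng.integers(0, 2, n)                  # randomized treatment indicator
    m_pre = rng.normal(size=n)
    m_post = 0.5 * m_pre + 0.6 * tx + rng.normal(scale=0.8, size=n)                 # a-path = 0.6
    y_pre = rng.normal(size=n)
    y_post = 0.5 * y_pre + 0.4 * m_post + 0.1 * tx + rng.normal(scale=0.8, size=n)  # b-path = 0.4

    # Mediator model: M_post ~ treatment + M_pre
    a_fit = sm.OLS(m_post, sm.add_constant(np.column_stack([tx, m_pre]))).fit()
    # Outcome model: Y_post ~ treatment + M_post + Y_pre
    b_fit = sm.OLS(y_post, sm.add_constant(np.column_stack([tx, m_post, y_pre]))).fit()

    a = a_fit.params[1]   # effect of treatment on posttest mediator
    b = b_fit.params[2]   # effect of posttest mediator on posttest outcome
    print(f"estimated mediated effect a*b = {a * b:.3f} (data-generating value 0.24)")
    ```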

  19. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
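
    The consequence of the constraint can be reproduced with a small two-class regression mixture fitted by EM, once with class-specific residual variances and once with a pooled (equality-constrained) variance. The data-generating values below are invented and the code is a bare-bones sketch, not the simulation design of the study.

    ```python
    # Bare-bones EM for a two-class regression mixture, fitted with free vs.
    # equality-constrained residual variances (illustrative simulation only).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n = 1000
    x = rng.normal(size=n)
    z = rng.random(n) < 0.5                      # true latent class
    y = np.where(z, 1.0 + 1.5 * x + rng.normal(scale=0.5, size=n),
                    1.0 - 0.5 * x + rng.normal(scale=2.0, size=n))
    X = np.column_stack([np.ones(n), x])

    def em_regression_mixture(X, y, equal_var, iters=200):
        k, p = 2, X.shape[1]
        beta = rng.normal(size=(k, p))
        sigma = np.array([1.0, 1.0])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: class responsibilities from normal densities of the residuals
            dens = np.stack([pi[j] * norm.pdf(y, X @ beta[j], sigma[j]) for j in range(k)])
            r = dens / dens.sum(axis=0)
            # M-step: weighted least squares per class
            for j in range(k):
                XtW = X.T * r[j]
                beta[j] = np.linalg.solve(XtW @ X, XtW @ y)
            resid2 = np.stack([(y - X @ beta[j]) ** 2 for j in range(k)])
            if equal_var:
                sigma[:] = np.sqrt((r * resid2).sum() / n)              # pooled variance
            else:
                sigma = np.sqrt((r * resid2).sum(axis=1) / r.sum(axis=1))
            pi = r.mean(axis=1)
        return beta, sigma, pi

    for constrained in (False, True):
        beta, sigma, pi = em_regression_mixture(X, y, equal_var=constrained)
        print(f"equal_var={constrained}: slopes={beta[:, 1].round(2)}, "
              f"sigmas={sigma.round(2)}, class sizes={pi.round(2)}")
    ```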

  20. Is the person-situation debate important for agent-based modeling and vice-versa?

    PubMed

    Sznajd-Weron, Katarzyna; Szwabiński, Janusz; Weron, Rafał

    2014-01-01

    Agent-based models (ABM) are believed to be a very powerful tool in the social sciences, sometimes even treated as a substitute for social experiments. When building an ABM we have to define the agents and the rules governing the artificial society. Given the complexity and our limited understanding of human nature, we face the problem of assuming that either personal traits, the situation or both have an impact on the social behavior of agents. However, as the long-standing person-situation debate in psychology shows, there is no consensus as to the underlying psychological mechanism, and the important question that arises is whether the modeling assumptions we make will have a substantial influence on the simulated behavior of the system as a whole or not. Studying two variants of the same agent-based model of opinion formation, we show that the decision to choose either personal traits or the situation as the primary factor driving social interactions is of critical importance. Using Monte Carlo simulations (for Barabasi-Albert networks) and analytic calculations (for a complete graph) we provide evidence that assuming a person-specific response to social influence at the microscopic level generally leads to a completely different and less realistic aggregate or macroscopic behavior than an assumption of a situation-specific response; a result that has been reported by social psychologists for a range of experimental setups, but has been downplayed or ignored in the opinion dynamics literature. This sensitivity to modeling assumptions has far-reaching consequences also beyond opinion dynamics, since agent-based models are becoming a popular tool among economists and policy makers and are often used as substitutes of real social experiments.
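
    The person-situation distinction maps onto what physicists call quenched versus annealed disorder: either each agent has a fixed propensity to act independently of its neighbours (person-specific), or every agent draws that propensity anew at each interaction (situation-specific). The toy binary-opinion model below only illustrates how the two assumptions enter an agent's update rule; it is not the specific q-voter model analysed in the paper, and its parameters are invented.

    ```python
    # Toy binary-opinion model on a complete graph contrasting a person-specific
    # (quenched) and a situation-specific (annealed) response to social influence.
    # Schematic illustration of the two update rules only.
    import numpy as np

    rng = np.random.default_rng(3)
    N, steps, p_independent = 200, 100_000, 0.3

    def run(person_specific):
        opinions = rng.choice([-1, 1], size=N)
        # Quenched case: a fixed subset of agents always acts independently.
        independents = rng.random(N) < p_independent
        for _ in range(steps):
            i = rng.integers(N)
            if person_specific:
                independent_now = independents[i]
            else:
                # Annealed case: independence is decided anew at each update.
                independent_now = rng.random() < p_independent
            if independent_now:
                opinions[i] = rng.choice([-1, 1])    # ignore the others
            else:
                j = rng.integers(N)                  # conform to a random peer
                opinions[i] = opinions[j]
        return abs(opinions.mean())                  # final magnetization |m|

    print("person-specific (quenched):", round(run(True), 2))
    print("situation-specific (annealed):", round(run(False), 2))
    ```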

  1. Thorough specification of the neurophysiologic processes underlying behavior and of their manifestation in EEG - demonstration with the go/no-go task.

    PubMed

    Shahaf, Goded; Pratt, Hillel

    2013-01-01

    In this work we demonstrate the principles of a systematic modeling approach of the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights which emerge from rather accepted assumptions regarding neuronal representation. We show that harnessing of even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data - the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as with comprehensive relevant ERP data analysis. In fact, we show that from the model-based spatiotemporal segregation of the processes, it is possible to derive simple and yet effective and theory-based EEG markers differentiating normal and ADHD subjects. We summarize by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.

  2. C³ and combat simulation - a survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, S.A. Jr.

    1983-01-04

    This article looks at the overlap between C³ and combat simulation, from the point of view of the developer of combat simulations and models. In this context, there are two different questions. The first is: how and to what extent should specific models of the C³ processes be incorporated in simulations of combat? Here the key point is the assessment of impact. In which types or levels of combat does C³ play a role sufficiently intricate and closely coupled with combat performance that it would significantly affect combat results? Conversely, when is C³ a known factor or modifier which can be simply accommodated without a specific detailed model being made for it? The second question is the inverse one. In the development of future C³ systems, what role should combat simulation play? Obviously, simulation of the operation of the hardware, software and other parts of the C³ system would be useful in its design and specification, but this is not combat simulation. When is it necessary to encase the C³ simulation model in a combat model which has enough detail to be considered a simulation itself? How should this outer combat model be scoped out as to the components needed? In order to build a background for answering these questions a two-pronged approach will be taken. First a framework for C³ modeling will be developed, in which the various types of modeling which can be done to include or encase C³ in a combat model are organized. This framework will hopefully be useful in describing the particular assumptions made in specific models in terms of what could be done in a more general way. Then a few specific models will be described, concentrating on the C³ portion of the simulations, or what could be interpreted as the C³ assumptions.

  3. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with Protease Inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The endpoints considered are the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution. Specific validation criteria, based on a standardized distance in means and variances of plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
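
    The acceptance rule itself is simple to express: for each endpoint, compute the standardized distances between real and simulated means and variances and check that both fall within the chosen tolerance. The helper below is a hypothetical implementation of one plausible reading of that criterion (the 10% threshold follows the abstract; the function name and data are invented).

    ```python
    # Hypothetical implementation of a standardized-distance validity check:
    # the simulated mean and variance must lie within +/- tol of the real data's
    # mean and variance on a standardized scale (one plausible reading only).
    import numpy as np

    def within_standardized_distance(real, simulated, tol=0.10):
        real, simulated = np.asarray(real, float), np.asarray(simulated, float)
        d_mean = (simulated.mean() - real.mean()) / real.std(ddof=1)
        d_var = (simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
        return abs(d_mean) <= tol and abs(d_var) <= tol, d_mean, d_var

    rng = np.random.default_rng(0)
    real = rng.normal(5.2, 1.0, size=120)        # e.g. observed cholesterol level (mmol/L)
    simulated = rng.normal(5.3, 1.05, size=120)  # simulated counterpart
    ok, d_mean, d_var = within_standardized_distance(real, simulated)
    print(f"valid={ok}, mean distance={d_mean:+.3f}, variance distance={d_var:+.3f}")
    ```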

  4. Asymmetries in the Processing of Vowel Height

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Monahan, Philip J.; Idsardi, William J.

    2012-01-01

    Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the…

  5. The problem of the second wind turbine - a note on a common but flawed wind power estimation method

    NASA Astrophysics Data System (ADS)

    Gans, F.; Miller, L. M.; Kleidon, A.

    2012-06-01

    Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than detailed engineering specifications of wind turbine design and placement.
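
    The point about momentum conservation can be made with a very small steady-state budget for the boundary layer: with a fixed momentum flux imported from aloft, adding turbine drag lowers the wind speed, so the extractable power saturates far below what one obtains by assuming the undisturbed hub-height speed. The drag-law form and all numbers below are illustrative only, not the model used in the paper.

    ```python
    # Illustrative steady-state momentum budget of the boundary layer (per unit
    # land area). With a fixed downward momentum flux tau_in, turbine drag lowers
    # the wind speed, so extractable power saturates well below the estimate that
    # assumes the undisturbed hub-height speed. All values invented.
    import numpy as np

    rho = 1.2                     # air density, kg/m^3
    cd = 0.002                    # surface drag coefficient (illustrative)
    u0 = 8.0                      # undisturbed boundary-layer wind speed, m/s
    tau_in = rho * cd * u0**2     # momentum flux that sustains u0 without turbines

    def steady_speed(c_turb):
        # Solve tau_in = rho * (cd + c_turb) * u^2 for u.
        return np.sqrt(tau_in / (rho * (cd + c_turb)))

    c_turb = np.linspace(0.0, 0.02, 200)      # added turbine drag coefficient
    u = steady_speed(c_turb)
    p_extracted = rho * c_turb * u**3         # turbine drag force * wind speed, W/m^2
    p_naive = rho * c_turb * u0**3            # same drag, but assuming u stays at u0

    print(f"max extractable (momentum-conserving): {p_extracted.max():.2f} W/m^2")
    print(f"naive estimate over the same drag range: {p_naive.max():.2f} W/m^2")
    ```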

  6. Extrapolating Survival from Randomized Trials Using External Data: A Review of Methods

    PubMed Central

    Jackson, Christopher; Stevens, John; Ren, Shijie; Latimer, Nick; Bojke, Laura; Manca, Andrea; Sharples, Linda

    2016-01-01

    This article describes methods used to estimate parameters governing long-term survival, or times to other events, for health economic models. Specifically, the focus is on methods that combine shorter-term individual-level survival data from randomized trials with longer-term external data, thus using the longer-term data to aid extrapolation of the short-term data. This requires assumptions about how trends in survival for each treatment arm will continue after the follow-up period of the trial. Furthermore, using external data requires assumptions about how survival differs between the populations represented by the trial and external data. Study reports from a national health technology assessment program in the United Kingdom were searched, and the findings were combined with “pearl-growing” searches of the academic literature. We categorized the methods that have been used according to the assumptions they made about how the hazards of death vary between the external and internal data and through time, and we discuss the appropriateness of the assumptions in different circumstances. Modeling choices, parameter estimation, and characterization of uncertainty are discussed, and some suggestions for future research priorities in this area are given. PMID:27005519

  7. Constructing inquiry: One school's journey to develop an inquiry-based school for teachers and students

    NASA Astrophysics Data System (ADS)

    Sisk-Hilton, Stephanie Lee

    This study examines the two way relationship between an inquiry-based professional development model and teacher enactors. The two year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact. The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.

  8. Statistical Mechanical Derivation of Jarzynski's Identity for Thermostated Non-Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Cuendet, Michel A.

    2006-03-01

    The recent Jarzynski identity (JI) relates thermodynamic free energy differences to nonequilibrium work averages. Several proofs of the JI have been provided on the thermodynamic level. They rely on assumptions such as equivalence of ensembles in the thermodynamic limit or weakly coupled infinite heat baths. However, the JI is widely applied to NVT computer simulations involving finite numbers of particles, whose equations of motion are strongly coupled to a few extra degrees of freedom modeling a thermostat. In this case, the above assumptions are no longer valid. We propose a statistical mechanical approach to the JI solely based on the specific equations of motion, without any further assumption. We provide a detailed derivation for the non-Hamiltonian Nosé-Hoover dynamics, which is routinely used in computer simulations to produce canonical sampling.
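
    For reference, the identity being derived relates the equilibrium free energy difference to an exponential average of the nonequilibrium work over realizations of the switching process:

    ```latex
    % Jarzynski identity: beta = 1/(k_B T), W is the nonequilibrium work performed
    % during the switching process, and the average is taken over realizations
    % started from the initial canonical ensemble.
    \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
    ```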

  9. Swimmer illness associated with marine water exposure and water quality indicators: impact of widely used assumptions

    EPA Science Inventory

    Studies of health risks associated with recreational water exposure require investigators to make choices about water quality indicator averaging techniques, exposure definitions, follow-up periods, and model specifications; but, investigators seldom describe the impact of these ...

  10. Optimal policy for value-based decision-making.

    PubMed

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
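
    A minimal simulation makes the ingredients concrete: a drift diffusion process whose decision boundary collapses over time, with the drift set by the value difference between options. This is a generic sketch with invented parameters and an arbitrary exponential collapse, not the optimal policy derived in the paper.

    ```python
    # Minimal drift diffusion sketch for a value-based choice with a collapsing
    # decision boundary (generic illustration; parameters are invented and the
    # boundary shape is not the optimal one derived in the paper).
    import numpy as np

    rng = np.random.default_rng(11)

    def ddm_trial(value_left, value_right, dt=0.001, noise=1.0,
                  b0=1.0, collapse_rate=0.5, t_max=5.0):
        drift = value_left - value_right              # evidence favours the higher value
        x, t = 0.0, 0.0
        while t < t_max:
            bound = b0 * np.exp(-collapse_rate * t)   # boundary collapses over time
            if x >= bound:
                return "left", t
            if x <= -bound:
                return "right", t
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return ("left" if x > 0 else "right"), t_max

    choices = [ddm_trial(0.8, 0.5)[0] for _ in range(1000)]
    print("P(choose higher-valued option) =", choices.count("left") / len(choices))
    ```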

  11. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638

  12. An equal force theory for network models of soft materials with arbitrary molecular weight distribution

    NASA Astrophysics Data System (ADS)

    Verron, E.; Gros, A.

    2017-09-01

    Most network models for soft materials, e.g. elastomers and gels, are dedicated to idealized materials: all chains admit the same number of Kuhn segments. Nevertheless, such standard models are not appropriate for materials involving multiple networks, and some specific constitutive equations devoted to these materials have been derived in the last few years. In nearly all cases, idealized networks of different chain lengths are assembled following an equal strain assumption; only a few papers adopt an equal stress assumption, although some authors argue that such a hypothesis would reflect the equilibrium of the different networks in contact. In this work, a full-network model with an arbitrary chain length distribution is derived by considering that chains of different lengths satisfy the equal force assumption in each direction of the unit sphere. The derivation is restricted to non-Gaussian freely jointed chains and to affine deformation of the sphere. Firstly, after a proper definition of the undeformed configuration of the network, we demonstrate that the equal force assumption leads to the equality of a normalized stretch in chains of different lengths. Secondly, we establish that the network with a chain length distribution behaves as an idealized full-network, of which both the chain length and the chain density are provided by the chain length distribution. This approach is finally illustrated with two examples: the derivation of a new expression for the Young modulus of bimodal interpenetrated polymer networks, and the prediction of the change in fluorescence during deformation of mechanochemically responsive elastomers.
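
    For a freely jointed chain the equal-force assumption can be written explicitly: since the chain force depends on the end-to-end distance only through the fractional extension, equal force across chains of different lengths is equivalent to equal normalized stretch. The sketch below uses the standard FJC relation; the notation is generic and may differ from the paper's.

    ```latex
    % Freely jointed chain of N Kuhn segments of length b, end-to-end distance r,
    % with \mathcal{L} the Langevin function and \mathcal{L}^{-1} its inverse.
    f = \frac{k_B T}{b}\, \mathcal{L}^{-1}\!\left(\frac{r}{N b}\right),
    \qquad
    f_{N_1} = f_{N_2}
    \;\Longleftrightarrow\;
    \frac{r_{N_1}}{N_1 b} = \frac{r_{N_2}}{N_2 b}.
    ```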

  13. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
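
    The independence issue can be demonstrated with a short simulation: when paired methods share a positive detection covariance, an estimator that treats them as independent overstates the probability of detecting an occupied site at least once and therefore underestimates occupancy. The simulation below is a deliberately simplified single-season, two-method illustration with invented parameters, not the models developed in the paper.

    ```python
    # Simplified illustration: two paired detection methods with positive covariance.
    # An estimator that assumes the methods detect independently (and knows the
    # marginal detection probabilities) underestimates occupancy.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sites, psi = 5000, 0.6
    pA, pB, cov = 0.5, 0.4, 0.12        # marginal detection probabilities and covariance

    occupied = rng.random(n_sites) < psi

    # Joint detection outcome at occupied sites: P(A and B) = pA*pB + cov, etc.
    p_both = pA * pB + cov
    p_a_only = pA * (1 - pB) - cov
    p_b_only = (1 - pA) * pB - cov
    p_none = 1 - p_both - p_a_only - p_b_only
    outcome = rng.choice(4, size=n_sites, p=[p_both, p_a_only, p_b_only, p_none])
    detected = occupied & (outcome != 3)          # site recorded if either method detects

    # Estimator that assumes independent methods with known pA, pB:
    p_star_indep = 1 - (1 - pA) * (1 - pB)        # assumed P(detect | occupied)
    psi_hat_indep = detected.mean() / p_star_indep

    # Estimator using the true (covariance-aware) detection probability:
    psi_hat_true = detected.mean() / (1 - p_none)

    print(f"true psi = {psi:.2f}")
    print(f"independence-assuming estimate = {psi_hat_indep:.2f}")
    print(f"covariance-aware estimate      = {psi_hat_true:.2f}")
    ```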

  14. A Critique of a Phenomenological Fiber Breakage Model for Stress Rupture of Composite Materials

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2010-01-01

    Stress rupture is not a critical failure mode for most composite structures, but there are a few applications where it can be critical. One application where stress rupture can be a critical design issue is in Composite Overwrapped Pressure Vessels (COPVs), where the composite material is highly and uniformly loaded for long periods of time and where very high reliability is required. COPVs are normally required to be proof loaded before being put into service to ensure strength, but it is feared that the proof load may cause damage that reduces the stress rupture reliability. Recently, a fiber breakage model was proposed specifically to estimate a reduced reliability due to proof loading. The fiber breakage model attempts to model physics believed to occur at the microscopic scale, but validation of the model has not occurred. In this paper, the fiber breakage model is re-derived while highlighting assumptions that were made during the derivation. Some of the assumptions are examined to assess their effect on the final predicted reliability.

  15. Exploring Duopoly Markets with Conjectural Variations

    ERIC Educational Resources Information Center

    Julien, Ludovic A.; Musy, Olivier; Saïdi, Aurélien W.

    2014-01-01

    In this article, the authors investigate competitive firm behaviors in a two-firm environment assuming linear cost and demand functions. By introducing conjectural variations, they capture the different market structures as specific configurations of a more general model. Conjectural variations are based on the assumption that each firm believes…

  16. The Cost and Impact of Scaling Up Pre-exposure Prophylaxis for HIV Prevention: A Systematic Review of Cost-Effectiveness Modelling Studies

    PubMed Central

    Gomez, Gabriela B.; Borquez, Annick; Case, Kelsey K.; Wheelock, Ana; Vassall, Anna; Hankins, Catherine

    2013-01-01

    Background Cost-effectiveness studies inform resource allocation, strategy, and policy development. However, due to their complexity, dependence on assumptions made, and inherent uncertainty, synthesising, and generalising the results can be difficult. We assess cost-effectiveness models evaluating expected health gains and costs of HIV pre-exposure prophylaxis (PrEP) interventions. Methods and Findings We conducted a systematic review comparing epidemiological and economic assumptions of cost-effectiveness studies using various modelling approaches. The following databases were searched (until January 2013): PubMed/Medline, ISI Web of Knowledge, Centre for Reviews and Dissemination databases, EconLIT, and region-specific databases. We included modelling studies reporting both cost and expected impact of a PrEP roll-out. We explored five issues: prioritisation strategies, adherence, behaviour change, toxicity, and resistance. Of 961 studies retrieved, 13 were included. Studies modelled populations (heterosexual couples, men who have sex with men, people who inject drugs) in generalised and concentrated epidemics from Southern Africa (including South Africa), Ukraine, USA, and Peru. PrEP was found to have the potential to be a cost-effective addition to HIV prevention programmes in specific settings. The extent of the impact of PrEP depended upon assumptions made concerning cost, epidemic context, programme coverage, prioritisation strategies, and individual-level adherence. Delivery of PrEP to key populations at highest risk of HIV exposure appears the most cost-effective strategy. Limitations of this review include the partial geographical coverage, our inability to perform a meta-analysis, and the paucity of information available exploring trade-offs between early treatment and PrEP. Conclusions Our review identifies the main considerations to address in assessing cost-effectiveness analyses of a PrEP intervention—cost, epidemic context, individual adherence level, PrEP programme coverage, and prioritisation strategy. Cost-effectiveness studies indicating where resources can be applied for greatest impact are essential to guide resource allocation decisions; however, the results of such analyses must be considered within the context of the underlying assumptions made. Please see later in the article for the Editors' Summary PMID:23554579

  17. The cost and impact of scaling up pre-exposure prophylaxis for HIV prevention: a systematic review of cost-effectiveness modelling studies.

    PubMed

    Gomez, Gabriela B; Borquez, Annick; Case, Kelsey K; Wheelock, Ana; Vassall, Anna; Hankins, Catherine

    2013-01-01

    Cost-effectiveness studies inform resource allocation, strategy, and policy development. However, due to their complexity, dependence on assumptions made, and inherent uncertainty, synthesising, and generalising the results can be difficult. We assess cost-effectiveness models evaluating expected health gains and costs of HIV pre-exposure prophylaxis (PrEP) interventions. We conducted a systematic review comparing epidemiological and economic assumptions of cost-effectiveness studies using various modelling approaches. The following databases were searched (until January 2013): PubMed/Medline, ISI Web of Knowledge, Centre for Reviews and Dissemination databases, EconLIT, and region-specific databases. We included modelling studies reporting both cost and expected impact of a PrEP roll-out. We explored five issues: prioritisation strategies, adherence, behaviour change, toxicity, and resistance. Of 961 studies retrieved, 13 were included. Studies modelled populations (heterosexual couples, men who have sex with men, people who inject drugs) in generalised and concentrated epidemics from Southern Africa (including South Africa), Ukraine, USA, and Peru. PrEP was found to have the potential to be a cost-effective addition to HIV prevention programmes in specific settings. The extent of the impact of PrEP depended upon assumptions made concerning cost, epidemic context, programme coverage, prioritisation strategies, and individual-level adherence. Delivery of PrEP to key populations at highest risk of HIV exposure appears the most cost-effective strategy. Limitations of this review include the partial geographical coverage, our inability to perform a meta-analysis, and the paucity of information available exploring trade-offs between early treatment and PrEP. Our review identifies the main considerations to address in assessing cost-effectiveness analyses of a PrEP intervention--cost, epidemic context, individual adherence level, PrEP programme coverage, and prioritisation strategy. Cost-effectiveness studies indicating where resources can be applied for greatest impact are essential to guide resource allocation decisions; however, the results of such analyses must be considered within the context of the underlying assumptions made. Please see later in the article for the Editors' Summary.

  18. Suppression of Metastasis by Primary Tumor and Acceleration of Metastasis Following Primary Tumor Resection: A Natural Law?

    PubMed

    Hanin, Leonid; Rose, Jason

    2018-03-01

    We study metastatic cancer progression through an extremely general individual-patient mathematical model that is rooted in the contemporary understanding of the underlying biomedical processes yet is essentially free of specific biological assumptions of mechanistic nature. The model accounts for primary tumor growth and resection, shedding of metastases off the primary tumor and their selection, dormancy and growth in a given secondary site. However, functional parameters descriptive of these processes are assumed to be essentially arbitrary. In spite of such generality, the model allows for computing the distribution of site-specific sizes of detectable metastases in closed form. Under the assumption of exponential growth of metastases before and after primary tumor resection, we showed that, regardless of other model parameters and for every set of site-specific volumes of detected metastases, the model-based likelihood-maximizing scenario is always the same: complete suppression of metastatic growth before primary tumor resection followed by an abrupt growth acceleration after surgery. This scenario is commonly observed in clinical practice and is supported by a wealth of experimental and clinical studies conducted over the last 110 years. Furthermore, several biological mechanisms have been identified that could bring about suppression of metastasis by the primary tumor and accelerated vascularization and growth of metastases after primary tumor resection. To the best of our knowledge, the methodology for uncovering general biomedical principles developed in this work is new.

  19. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models, as well as the algorithms to initially build system models and continuously update them from the latest streaming sensor data. Its main strengths are that it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; automatically updates and calibrates system models using the latest streaming sensor data; creates device-specific models that capture the exact behavior of devices of the same type; adapts to evolving systems; and can reduce computational complexity, allowing faster simulations.
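
    The record does not disclose the specific data-stream-mining algorithms used. Purely as an illustration of the general idea of continuously updating a system model from streaming telemetry, the sketch below maintains a recursive least-squares estimate of a linear input-output model and corrects it with every new sample; the class name, the linear-model form, and all constants are assumptions for illustration, not the invention's method.

        import numpy as np

        class RecursiveLeastSquares:
            """Online linear model y ~ w.x, updated one telemetry sample at a time.

            Generic illustration of adaptive system modeling, not the specific
            data-stream-mining algorithm used for the ISS EPS models.
            """

            def __init__(self, n_features, forgetting=0.99):
                self.w = np.zeros(n_features)          # current model coefficients
                self.P = np.eye(n_features) * 1e3      # inverse covariance (large = uninformative)
                self.lam = forgetting                  # forgetting factor discounts old telemetry

            def update(self, x, y):
                x = np.asarray(x, dtype=float)
                Px = self.P @ x
                gain = Px / (self.lam + x @ Px)        # Kalman-style gain
                err = y - self.w @ x                   # prediction error on the new sample
                self.w = self.w + gain * err           # correct the model toward the new data
                self.P = (self.P - np.outer(gain, Px)) / self.lam
                return err

            def predict(self, x):
                return self.w @ np.asarray(x, dtype=float)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            rls = RecursiveLeastSquares(n_features=3)
            true_w = np.array([2.0, -1.0, 0.5])        # hypothetical "true" system behavior
            for _ in range(500):                       # simulated telemetry stream
                x = rng.normal(size=3)
                y = true_w @ x + rng.normal(scale=0.1)
                rls.update(x, y)
            print("learned coefficients:", np.round(rls.w, 2))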

  20. Causal Mediation Analysis of Survival Outcome with Multiple Mediators.

    PubMed

    Huang, Yen-Tsung; Yang, Hwai-I

    2017-05-01

    Mediation analyses have been a popular approach to investigate the effect of an exposure on an outcome through a mediator. Mediation models with multiple mediators have been proposed for continuous and dichotomous outcomes. However, development of multimediator models for survival outcomes is still limited. We present methods for multimediator analyses using three survival models: Aalen additive hazard models, Cox proportional hazard models, and semiparametric probit models. Effects through mediators can be characterized by path-specific effects, for which definitions and identifiability assumptions are provided. We derive closed-form expressions for path-specific effects for the three models, which are intuitively interpreted using a causal diagram. Mediation analyses using Cox models under the rare-outcome assumption and Aalen additive hazard models consider effects on the log hazard ratio and the hazard difference, respectively; analyses using semiparametric probit models consider effects on the difference in transformed survival time and on survival probability. The three models were applied to a hepatitis study where we investigated effects of hepatitis C on liver cancer incidence mediated through baseline and/or follow-up hepatitis B viral load. The three methods show consistent results on their respective effect scales, suggesting an adverse estimated effect of hepatitis C on liver cancer not mediated through hepatitis B, and a protective estimated effect mediated through baseline (and possibly follow-up) hepatitis B viral load. Causal mediation analyses of survival outcomes with multiple mediators are developed for additive hazard, proportional hazard, and probit models, with their utility demonstrated in a hepatitis study.
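
    The paper derives closed-form path-specific effects for the three survival models, which are not reproduced here. As a hedged illustration of how path-specific effects with two ordered mediators are composed by varying the exposure level that each causal path "sees", the sketch below evaluates them by Monte Carlo in a toy linear structural model; all coefficients and the linear form are illustrative assumptions, not estimates from the hepatitis study.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 200_000

        # Hypothetical linear structural model with exposure A and ordered mediators M1 -> M2.
        # Coefficients are illustrative assumptions, not estimates from the hepatitis study.
        def m1(a):                 # first (baseline) mediator
            return 0.8 * a + rng.normal(size=N)

        def m2(a, m1_val):         # second (follow-up) mediator, affected by exposure and M1
            return 0.5 * a + 0.6 * m1_val + rng.normal(size=N)

        def y(a, m1_val, m2_val):  # outcome
            return 1.0 * a + 0.7 * m1_val + 0.4 * m2_val + rng.normal(size=N)

        def mean_y(a_direct, a_m1, a_m2):
            """E[Y(a_direct, M1(a_m1), M2(a_m2, M1(a_m1)))]: each argument is the exposure
            level 'seen' by the direct path, by M1, and by M2 respectively."""
            M1 = m1(a_m1)
            M2 = m2(a_m2, M1)
            return y(a_direct, M1, M2).mean()

        # Path-specific decomposition of the total effect of A = 1 vs A = 0
        direct     = mean_y(1, 0, 0) - mean_y(0, 0, 0)   # A -> Y, not through M1 or M2
        through_m2 = mean_y(1, 0, 1) - mean_y(1, 0, 0)   # A -> M2 -> Y only
        through_m1 = mean_y(1, 1, 1) - mean_y(1, 0, 1)   # A -> M1 (-> M2) -> Y
        total      = mean_y(1, 1, 1) - mean_y(0, 0, 0)

        print(f"direct {direct:.2f}, via M2 {through_m2:.2f}, via M1 {through_m1:.2f}, total {total:.2f}")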

  1. Stability of Attachment Representations during Adolescence: The Influence of Ego-Identity Status.

    ERIC Educational Resources Information Center

    Zimmermann, Peter; Becker-Stoll, Fabienne

    2002-01-01

    Examines two core assumptions of attachment theory: internal working models of attachment should increase in stability during development, and attachment is related to the adaptive solution of stage-salient issues, in adolescence, specifically to identity formation. Results show secure attachment representation was positively associated with the…

  2. Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato

    2007-01-01

    This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…

  3. A Zero- and K-Inflated Mixture Model for Health Questionnaire Data

    PubMed Central

    Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.

    2011-01-01

    In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673
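
    The published model fits the graded component with IRT; as a simplified sketch of the zero- and K-inflated mixture idea and its EM fitting, the code below replaces the graded component with a single binomial distribution over total scores. The binomial stand-in, the starting values, and the simulated data are assumptions for illustration only.

        import numpy as np
        from scipy.stats import binom

        def em_zero_k_inflated(scores, K, n_iter=200):
            """EM for a zero- and K-inflated mixture of total scores in {0, ..., K}.

            The 'graded' component of the published model is an IRT model; here it is
            simplified to a single binomial(K, p) distribution purely for illustration.
            Returns mixing proportions (pi0, piK, pi_graded) and the binomial p.
            """
            scores = np.asarray(scores)
            pi0, piK, p = 0.1, 0.1, 0.5                 # crude starting values
            for _ in range(n_iter):
                # E-step: responsibilities of the three components for each respondent
                f0 = np.where(scores == 0, 1.0, 0.0) * pi0
                fK = np.where(scores == K, 1.0, 0.0) * piK
                fg = binom.pmf(scores, K, p) * (1 - pi0 - piK)
                total = f0 + fK + fg
                r0, rK, rg = f0 / total, fK / total, fg / total
                # M-step: update mixing weights and the graded-component parameter
                pi0, piK = r0.mean(), rK.mean()
                p = (rg * scores).sum() / (rg.sum() * K)
            return pi0, piK, 1 - pi0 - piK, p

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            K = 20
            # Simulated questionnaire totals: 30% inflated at 0, 10% at K, rest "graded"
            z = rng.choice([0, 1, 2], size=5000, p=[0.3, 0.1, 0.6])
            scores = np.where(z == 0, 0, np.where(z == 1, K, rng.binomial(K, 0.4, size=5000)))
            print(em_zero_k_inflated(scores, K))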

  4. FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nevin, J.P.; Connor, J.A.; Newell, C.J.

    1997-12-31

    A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users with determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on and represents an enhancement to the Domenico analytical groundwater transport model. These enhancements include use of an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady-state conditions. FATE 5 was developed in Microsoft® Excel and is controlled by means of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution used to help the user determine if the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.
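
    FATE 5 itself is an Excel/Solver tool built on the full Domenico solution. As a rough illustration of the calibration idea, the sketch below fits a first-order decay rate so that a simplified steady-state centerline Domenico-type expression (longitudinal dispersion only) matches hypothetical monitoring-well concentrations; the simplified formula, parameter values, and data are assumptions, not FATE 5 output.

        import numpy as np
        from scipy.optimize import curve_fit

        # Simplified steady-state centerline Domenico-type solution with first-order decay,
        # keeping only longitudinal dispersion (a rough stand-in for the full FATE 5/Domenico model).
        def centerline_conc(x, lam, C0=10.0, alpha_x=10.0, v=0.1):
            """x in m, lam in 1/day, v (seepage velocity) in m/day, alpha_x (dispersivity) in m."""
            return C0 * np.exp((x / (2.0 * alpha_x)) * (1.0 - np.sqrt(1.0 + 4.0 * lam * alpha_x / v)))

        # Hypothetical monitoring-well data (distance downgradient in m, concentration in mg/L)
        x_obs = np.array([0.0, 20.0, 50.0, 100.0, 150.0])
        c_obs = np.array([10.0, 6.1, 2.9, 0.85, 0.26])

        # Calibrate the decay rate to the site data (analogous in spirit to FATE 5's use of Excel Solver)
        (lam_fit,), _ = curve_fit(centerline_conc, x_obs, c_obs, p0=[1e-3], bounds=(0, 1))
        print(f"calibrated first-order decay rate: {lam_fit:.4f} per day")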

  5. On the accuracy of personality judgment: a realistic approach.

    PubMed

    Funder, D C

    1995-10-01

    The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.

  6. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also seen increasing development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
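
    The study itself examines parametric and non-parametric MI procedures under different missingness mechanisms; the sketch below only illustrates the general workflow of combining ROC analysis with multiple imputation: impute several completed datasets, compute the AUC and a bootstrap standard error in each, and pool with Rubin's rules. The crude group-wise normal imputation and all simulated numbers are assumptions, not the authors' methods.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        # Simulated biomarker: higher values in cases; ~30% of values missing completely at random
        n = 400
        y = rng.integers(0, 2, size=n)
        x = rng.normal(loc=y * 1.0, scale=1.0)
        x[rng.random(n) < 0.3] = np.nan

        def impute_once(x, y):
            """Crude parametric imputation: draw missing values from a normal model fitted
            within each outcome group (a stand-in for a proper MI procedure such as MICE)."""
            xi = x.copy()
            for g in (0, 1):
                obs = x[(y == g) & ~np.isnan(x)]
                miss = (y == g) & np.isnan(x)
                xi[miss] = rng.normal(obs.mean(), obs.std(ddof=1), size=miss.sum())
            return xi

        def auc_and_bootstrap_se(x, y, n_boot=200):
            auc = roc_auc_score(y, x)
            idx = np.arange(len(y))
            boots = []
            for _ in range(n_boot):
                b = rng.choice(idx, size=len(idx), replace=True)
                if len(np.unique(y[b])) == 2:          # need both classes in the resample
                    boots.append(roc_auc_score(y[b], x[b]))
            return auc, np.std(boots, ddof=1)

        # Multiple imputation + Rubin's rules for the pooled AUC and its variance
        m = 20
        aucs, variances = [], []
        for _ in range(m):
            xi = impute_once(x, y)
            a, se = auc_and_bootstrap_se(xi, y)
            aucs.append(a)
            variances.append(se ** 2)

        q_bar = np.mean(aucs)                          # pooled point estimate
        W = np.mean(variances)                         # within-imputation variance
        B = np.var(aucs, ddof=1)                       # between-imputation variance
        T = W + (1 + 1 / m) * B                        # total variance (Rubin's rules)
        print(f"pooled AUC = {q_bar:.3f} (SE = {np.sqrt(T):.3f})")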

  7. Analysis of functional importance of binding sites in the Drosophila gap gene network model.

    PubMed

    Kozlov, Konstantin; Gursky, Vitaly V; Kulakovskiy, Ivan V; Dymova, Arina; Samsonova, Maria

    2015-01-01

    The statistical thermodynamics based approach provides a promising framework for construction of the genotype-phenotype map in many biological systems. Among important aspects of a good model connecting the DNA sequence information with that of a molecular phenotype (gene expression) is the selection of regulatory interactions and relevant transcription factor binding sites. As the model may predict different levels of the functional importance of specific binding sites in different genomic and regulatory contexts, it is essential to formulate and study such models under different modeling assumptions. We elaborate a two-layer model for the Drosophila gap gene network and include in the model a combined set of transcription factor binding sites and concentration-dependent regulatory interaction between gap genes hunchback and Kruppel. We show that the new variants of the model are more consistent in terms of gene expression predictions for various genetic constructs in comparison to previous work. We quantify the functional importance of binding sites by calculating their impact on gene expression in the model and calculate how these impacts correlate across all sites under different modeling assumptions. The assumption about the dual interaction between hb and Kr leads to the most consistent modeling results, but, on the other hand, may obscure existence of indirect interactions between binding sites in regulatory regions of distinct genes. The analysis confirms the previously formulated regulation concept of many weak binding sites working in concert. The model predicts a more or less uniform distribution of functionally important binding sites over the sets of experimentally characterized regulatory modules and other open chromatin domains.

  8. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
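
    The surveys discussed were analyzed with specialized MRDS/CDS software and Bayesian priors; as a minimal illustration of the conventional distance sampling (CDS) building block referred to above, the sketch below fits a half-normal detection function to simulated perpendicular distances by maximum likelihood and converts it into a density estimate. The simulation settings, and the omission of group size, covariates, and priors, are simplifying assumptions.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        # Simulated line-transect survey: 2000 animals present in the surveyed strip
        # (both sides pooled; strip area = 2 * w * L_total), detected with half-normal probability.
        rng = np.random.default_rng(4)
        w, L_total, sigma_true = 1.0, 500.0, 0.35        # truncation (km), total transect length (km)
        x_all = rng.uniform(0, w, size=2000)             # perpendicular distances of animals present
        detected = rng.random(2000) < np.exp(-x_all**2 / (2 * sigma_true**2))
        distances = x_all[detected]

        def neg_log_lik(sigma):
            """Half-normal CDS likelihood: f(x) = g(x) / integral_0^w g(u) du."""
            g = np.exp(-distances**2 / (2 * sigma**2))
            mu = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)   # integral of g on [0, w]
            return -(np.log(g) - np.log(mu)).sum()

        fit = minimize_scalar(neg_log_lik, bounds=(0.01, 5.0), method="bounded")
        sigma_hat = fit.x
        esw = sigma_hat * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma_hat) - 0.5)  # effective strip half-width
        density = len(distances) / (2 * L_total * esw)   # animals per km^2 (no group size, perfect g(0))
        print(f"sigma = {sigma_hat:.3f} km, ESW = {esw:.3f} km, density = {density:.2f} per km^2")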

  9. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  10. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Adaptive Modeling of the International Space Station Electrical Power System

    NASA Technical Reports Server (NTRS)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  12. [Modality specific systems of representation and processing of information. Superfluous images, useful representations, necessary evil or inevitable consequences of optimal stimulus processing].

    PubMed

    Zimmer, H D

    1993-01-01

    This paper discusses what underlies the assumption of modality-specific processing systems and representations. Starting from the information-processing approach, relevant aspects of mental representations and their physiological realizations are discussed. Three different forms of modality-specific systems are then distinguished: stimulus-specific processing, specific informational formats, and modular subsystems. In parallel, three senses of analogue systems are differentiated: holding an analogue relation, having a specific informational format, and operating under a set of specific processing constraints. These different aspects of the assumption of modality-specific systems are illustrated with the example of visual and spatial information processing. It is concluded that postulating information-specific systems is not a superfluous assumption but a necessary one, and more likely still an inevitable consequence of optimizing stimulus processing.

  13. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    EPA Science Inventory

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...

  14. Systematic Reviews of Animal Models: Methodology versus Epistemology

    PubMed Central

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions. PMID:23372426

  15. Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development

    DTIC Science & Technology

    2016-02-01

    ... proof in mathematics. For example, consider the proof of the Pythagorean Theorem illustrated at http://www.cut-the-knot.org/pythagoras/ ... methods and tools have made significant progress in their ability to model software designs and prove correctness theorems about the systems modeled ... “assumption criticality” or “theorem root set size” ... SITAPS detects potentially brittle verification cases. SITAPS provides tools and techniques that ...

  16. Mercury deposition in snow near an industrial emission source in the western U.S. and comparison to ISC3 model predictions

    USGS Publications Warehouse

    Abbott, M.L.; Susong, D.D.; Krabbenhoft, D.P.; Rood, A.S.

    2002-01-01

    Mercury (total and methyl) was evaluated in snow samples collected near a major mercury emission source on the Idaho National Engineering and Environmental Laboratory (INEEL) in southeastern Idaho and 160 km downwind in the Teton Range in western Wyoming. The sampling was done to assess near-field (<12 km) deposition rates around the source, compare them to those measured in a relatively remote, pristine downwind location, and to use the measurements to develop improved, site-specific model input parameters for the precipitation scavenging coefficient and the fraction of Hg emissions deposited locally. Measured snow water concentrations (ng L-1) were converted to deposition (µg m-2) using the sample location snow water equivalent. The deposition was then compared to that predicted using the ISC3 air dispersion/deposition model, which was run with a range of particle and vapor scavenging coefficient input values. Accepted model statistical performance measures (fractional bias and normalized mean square error) were calculated for the different modeling runs, and the best model performance was selected. Measured concentrations close to the source (average = 5.3 ng L-1) were about twice those measured in the Teton Range (average = 2.7 ng L-1), which were within the expected range of values for remote background areas. For most of the sampling locations, the ISC3 model predicted within a factor of two of the observed deposition. The best modeling performance was obtained using a scavenging coefficient value for 0.25 µm diameter particulate and the assumption that all of the mercury is reactive Hg(II) and subject to local deposition. A 0.1 µm particle assumption provided conservative overprediction of the data, while a vapor assumption resulted in highly variable predictions. Partitioning a fraction of the Hg emissions to elemental Hg(0) (a U.S. EPA default assumption for combustion facility risk assessments) would have underpredicted the observed fallout.

  17. The Puerto Rican Prison Experience: A Multicultural Understanding of Values, Beliefs, and Attitudes.

    ERIC Educational Resources Information Center

    Rivera, Edil Torres; Wilbur, Michael P.; Roberts-Wilbur, Janice

    1998-01-01

    Counselors are challenged to use a nontraditional, multicultural approach with Puerto Rican inmates; to strive to understand their values, beliefs, experiences, and behaviors; and to question their own underlying assumptions and linear models of therapy. Five specific recommendations are made, and a comparison of beliefs and values is appended.…

  18. Adapting Instruction to Individual Learner Differences: A Research Paradigm for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Mills, Steven C.; Ragan, Tillman J.

    This paper examines a research paradigm that is particularly suited to experimentation-related computer-based instruction and integrated learning systems. The main assumption of the model is that one of the most powerful capabilities of computer-based instruction, and specifically of integrated learning systems, is the capacity to adapt…

  19. Of Mental Models, Assumptions and Heuristics: The Case of Acids and Acid Strength

    ERIC Educational Resources Information Center

    McClary, LaKeisha Michelle

    2010-01-01

    This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and…

  20. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
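
    As a minimal sketch of the DEA-CRS technique referred to above, the code below solves the input-oriented, constant-returns-to-scale envelopment linear programme for each decision-making unit; the hospital input/output data are hypothetical, and refinements used in the study (variable returns to scale, FDH, panel structure) are not included.

        import numpy as np
        from scipy.optimize import linprog

        def dea_crs_input_oriented(X, Y):
            """Input-oriented CCR (constant returns to scale) DEA efficiency scores.

            X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs.
            Returns an efficiency score in (0, 1] for each decision-making unit (hospital).
            """
            n, m = X.shape
            s = Y.shape[1]
            scores = []
            for o in range(n):
                # decision variables: [theta, lambda_1, ..., lambda_n]
                c = np.r_[1.0, np.zeros(n)]
                # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
                A_in = np.hstack([-X[o].reshape(m, 1), X.T])
                b_in = np.zeros(m)
                # outputs: -sum_j lambda_j * y_rj <= -y_ro
                A_out = np.hstack([np.zeros((s, 1)), -Y.T])
                b_out = -Y[o]
                res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                              bounds=[(0, None)] * (n + 1), method="highs")
                scores.append(res.x[0])
            return np.array(scores)

        if __name__ == "__main__":
            # Hypothetical data: 6 hospitals, inputs = (beds, staff), output = weighted discharges
            X = np.array([[100, 300], [120, 280], [80, 200], [150, 400], [90, 260], [110, 310]], float)
            Y = np.array([[5000], [5200], [4100], [6000], [3900], [4800]], float)
            print(np.round(dea_crs_input_oriented(X, Y), 3))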

  1. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    NASA Technical Reports Server (NTRS)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specifications for arbitrary thermal boundary conditions are not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications and the latter condition could lead to an ill posed problem for fully-developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations is examined. The approach taken is to assume a Taylor expansion in the wall normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero value at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited only to a very small region near the wall.
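
    The paper's specific coefficients and closure relations are not reproduced in the abstract; the generic form of such a near-wall Taylor expansion of the fluctuating temperature, written here only to make the idea concrete, is

        % Generic near-wall Taylor expansion of the fluctuating temperature (illustrative form only;
        % the paper's specific coefficients and closure are not reproduced here).
        \theta'(y) = a + b\,y + c\,y^{2} + \mathcal{O}(y^{3}),
        \qquad
        \overline{\theta'^{2}}(y) = \overline{a^{2}} + 2\,\overline{ab}\,y
          + \bigl(\overline{b^{2}} + 2\,\overline{ac}\bigr)\,y^{2} + \mathcal{O}(y^{3}),

    with the conventional zero-fluctuating-wall-temperature condition corresponding to a = 0, in which case the temperature variance grows from the wall as y squared; allowing a non-zero a recovers the more general boundary condition examined in the paper.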

  2. Toward a Graded Psycholexical Space Mapping Model: Sublexical and Lexical Representations in Chinese Character Reading Development.

    PubMed

    Tong, Xiuli; McBride, Catherine

    2017-07-01

    Following a review of contemporary models of word-level processing for reading and their limitations, we propose a new hypothetical model of Chinese character reading, namely, the graded lexical space mapping model that characterizes how sublexical radicals and lexical information are involved in Chinese character reading development. The underlying assumption of this model is that Chinese character recognition is a process of competitive mappings of phonology, semantics, and orthography in both lexical and sublexical systems, operating as functions of statistical properties of print input based on the individual's specific level of reading. This model leads to several testable predictions concerning how the quasiregularity and continuity of Chinese-specific radicals are organized in memory for both child and adult readers at different developmental stages of reading.

  3. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  4. Role of large scale energy systems models in R and D planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamontagne, J.

    1980-11-01

    Long-term energy policy deals with the problem of finite supplies of convenient energy sources becoming more costly as they are depleted. The development of alternative technologies to provide new sources of energy and extend the lives of current ones is an attractive option available to government. Thus, one aspect of long-term energy policy involves investment in R and D. The importance of the problems addressed by R and D to the future of society (especially with regard to energy) dictates adoption of a cogent approach to resource allocation and to the designation of priorities for R and D. It is hoped that energy systems models, when properly used, can provide useful inputs to this process. The influence of model results on energy policy makers who are not knowledgeable about flaws or uncertainties in the models, errors in assumptions in model inputs which can result in faulty forecasts, the overall usefulness of energy system models, and model limitations are discussed. It is suggested that the large scale energy systems models currently used for assessing a broad spectrum of policy issues need to be replaced with reasonably simple models capable of dealing with uncertainty in a straightforward manner, and their methodologies and the meaning of their results should be transparent, especially to those removed from the modeling process. Energy models should be clearly related to specific issues. Methodologies should be clearly related to specific decisions, and should allow adjustments to be easily made for alternative assumptions and for additional knowledge gained during the evolution of the energy system. (LCL)

  5. Multiphysics modeling of two-phase film boiling within porous corrosion deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Miaomiao, E-mail: mmjin@mit.edu; Short, Michael, E-mail: hereiam@mit.edu

    2016-07-01

    Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can cause accelerated corrosion of the fuel cladding, increase radiation fields and hence greater exposure risk to plant workers once activated, and induce a downward axial power shift causing an imbalance in core power distribution. In order to facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully-coupled, multiphysics model to simulate heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a reformed assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments by suggesting the new concept of double dryout specifically in thick porous media with boiling chimneys. - Highlights: • A two-phase model of CRUD's effects on fuel cladding is developed and improved. • This model eliminates the formerly erroneous assumption of wick boiling. • Higher fuel cladding temperatures are predicted when accounting for two-phase flow. • Double-peaks in thermal conductivity vs. heat flux in experiments are explained. • A “double dryout” mechanism in CRUD is proposed based on the model and experiments.

  6. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Three dimensional thermal pollution models. Volume 1: Review of mathematical formulations. [waste heat discharge from power plants and effects on ecosystems

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.

    1978-01-01

    A mathematical model package for thermal pollution analyses and prediction is presented. These models, intended as user's manuals, are three dimensional and time dependent, using the primitive equation approach. Although they have sufficient generality for application at sites with diverse topographical features, they also present specific instructions regarding data preparation for program execution and sample problems. The mathematical formulation of these models is presented, including assumptions, approximations, governing equations, boundary and initial conditions, numerical method of solution, and some results.

  8. Dendrite and Axon Specific Geometrical Transformation in Neurite Development

    PubMed Central

    Mironov, Vasily I.; Semyanov, Alexey V.; Kazantsev, Victor B.

    2016-01-01

    We propose a model of neurite growth to explain the differences in dendrite and axon specific neurite development. The model implements basic molecular kinetics, e.g., building protein synthesis and transport to the growth cone, and includes explicit dependence of the building kinetics on the geometry of the neurite. The basic assumption was that the radius of the neurite decreases with length. We found that the neurite dynamics crucially depended on the relationship between the rate of active transport and the rate of morphological changes. If these rates were in the balance, then the neurite displayed axon specific development with a constant elongation speed. For dendrite specific growth, the maximal length was rapidly saturated by degradation of building protein structures or limited by proximal part expansion reaching the characteristic cell size. PMID:26858635
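
    The abstract does not give the model's kinetic equations; purely to make the qualitative mechanism concrete (growth-cone supply limited by a radius that shrinks with length, opposed by degradation), the toy ODE below is integrated forward in time. Every functional form and constant is an illustrative assumption, and the saturating behaviour it produces corresponds to the dendrite-like regime described above.

        import numpy as np

        # Toy neurite-elongation ODE (illustrative only; not the authors' kinetic model):
        # dL/dt = supply(L) - degradation, where protein supply to the growth cone scales
        # with the cross-sectional area of a neurite whose radius shrinks with length.
        def simulate(r0=1.0, k_taper=0.02, v_supply=2.0, k_deg=0.35, dt=0.01, t_end=1000.0):
            L, t = 0.0, 0.0
            traj = []
            while t < t_end:
                radius = r0 / (1.0 + k_taper * L)        # radius decreases with neurite length
                supply = v_supply * radius**2            # transport-limited supply ~ cross-section
                dLdt = supply - k_deg                    # growth vs. degradation of building material
                L = max(L + dLdt * dt, 0.0)
                traj.append((t, L))
                t += dt
            return np.array(traj)

        traj = simulate()
        print(f"length after {traj[-1, 0]:.0f} time units: {traj[-1, 1]:.2f} "
              "(dendrite-like saturation once supply balances degradation)")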

  9. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  10. Disease Extinction Versus Persistence in Discrete-Time Epidemic Models.

    PubMed

    van den Driessche, P; Yakubu, Abdul-Aziz

    2018-04-12

    We focus on discrete-time infectious disease models in populations that are governed by constant, geometric, Beverton-Holt or Ricker demographic equations, and give a method for computing the basic reproduction number, R₀. When R₀ < 1 and the demographic population dynamics are asymptotically constant or under geometric growth (non-oscillatory), we prove global asymptotic stability of the disease-free equilibrium of the disease models. Under the same demographic assumption, when R₀ > 1, we prove uniform persistence of the disease. We apply our theoretical results to specific discrete-time epidemic models that are formulated for SEIR infections, cholera in humans and anthrax in animals. Our simulations show that a unique endemic equilibrium of each of the three specific disease models is asymptotically stable whenever R₀ > 1.
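
    The paper's general method for computing the basic reproduction number is not reproduced here; as a minimal sketch of the standard next-generation-matrix calculation for a discrete-time model, the code below computes R₀ for a simple SEIR without demography and checks the threshold against the per-step growth factor of the infected classes. The model form and parameter values are assumptions for illustration.

        import numpy as np

        # Next-generation-matrix computation of R0 for a simple discrete-time SEIR model
        # (no demography), linearized at the disease-free equilibrium:
        #   E_{t+1} = (1 - sigma) E_t + beta I_t      (new infections enter E)
        #   I_{t+1} = sigma E_t + (1 - gamma) I_t
        # The parameter values below are illustrative assumptions.
        beta, sigma, gamma = 0.4, 0.25, 0.2

        F = np.array([[0.0, beta],       # new infections produced per time step
                      [0.0, 0.0]])
        T = np.array([[1 - sigma, 0.0],  # transitions/survival of already-infected classes
                      [sigma, 1 - gamma]])

        K = F @ np.linalg.inv(np.eye(2) - T)          # next-generation matrix
        R0 = max(abs(np.linalg.eigvals(K)))           # basic reproduction number = spectral radius

        growth = max(abs(np.linalg.eigvals(F + T)))   # per-step growth factor of infected classes
        print(f"R0 = {R0:.2f} (analytically beta/gamma = {beta/gamma:.2f}), "
              f"initial growth factor = {growth:.3f} ({'>' if growth > 1 else '<'} 1)")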

  11. Neurocognitive Approaches to Developmental Disorders of Numerical and Mathematical Cognition: The Perils of Neglecting the Role of Development

    ERIC Educational Resources Information Center

    Ansari, Daniel

    2010-01-01

    The present paper provides a critical overview of how adult neuropsychological models have been applied to the study of the atypical development of numerical cognition. Specifically, the following three assumptions are challenged: 1. Profiles of strengths and weaknesses do not change over developmental time. 2. Similar neuronal structures are…

  12. Model specification and bootstrapping for multiply imputed data: An application to count models for the frequency of alcohol use

    PubMed Central

    Comulada, W. Scott

    2015-01-01

    Stata’s mi commands provide powerful tools to conduct multiple imputation in the presence of ignorable missing data. In this article, I present Stata code to extend the capabilities of the mi commands to address two areas of statistical inference where results are not easily aggregated across imputed datasets. First, mi commands are restricted to covariate selection. I show how to address model fit to correctly specify a model. Second, the mi commands readily aggregate model-based standard errors. I show how standard errors can be bootstrapped for situations where model assumptions may not be met. I illustrate model specification and bootstrapping on frequency counts for the number of times that alcohol was consumed in data with missing observations from a behavioral intervention. PMID:26973439

  13. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
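
    As a minimal sketch of the first step of the proposed methodology (measuring nonlinear interdependence with mutual information), the code below computes a pairwise mutual-information matrix from binned variables and contrasts it with Pearson correlation on a nonlinearly dependent pair; the Dirichlet-process clustering and group elastic-net stages are not reproduced, and the binning scheme and simulated data are assumptions.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def pairwise_mutual_information(X, bins=12):
            """Pairwise mutual-information matrix computed from binned variables.

            Sketches only the first step of the proposed methodology (characterizing
            nonlinear interdependence); the DP clustering and group elastic-net steps
            are not reproduced here.
            """
            n_vars = X.shape[1]
            binned = np.array([np.digitize(col, np.histogram_bin_edges(col, bins)) for col in X.T]).T
            M = np.zeros((n_vars, n_vars))
            for i in range(n_vars):
                for j in range(n_vars):
                    M[i, j] = mutual_info_score(binned[:, i], binned[:, j])
            return M

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            n = 5000
            x1 = rng.normal(size=n)
            x2 = x1**2 + 0.1 * rng.normal(size=n)      # nonlinearly dependent on x1
            x3 = rng.normal(size=n)                    # independent noise
            X = np.column_stack([x1, x2, x3])
            print(np.round(pairwise_mutual_information(X), 2))
            print("Pearson r(x1, x2):", round(np.corrcoef(x1, x2)[0, 1], 2))  # near zero despite dependence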

  14. Testing model parameters for wave-induced dune erosion using observations from Hurricane Sandy

    NASA Astrophysics Data System (ADS)

    Overbeck, J. R.; Long, J. W.; Stockdon, H. F.

    2017-01-01

    Models of dune erosion depend on a set of assumptions that dictate the predicted evolution of dunes throughout the duration of a storm. Lidar observations made before and after Hurricane Sandy at over 800 profiles with diverse dune elevations, widths, and volumes are used to quantify specific dune erosion model parameters including the dune face slope, which controls dune avalanching, and the trajectory of the dune toe, which controls dune migration. Wave-impact models of dune erosion assume a vertical dune face and erosion of the dune toe along the foreshore beach slope. Observations presented here show that these assumptions are not always valid and require additional testing if these models are to be used to predict coastal vulnerability for decision-making purposes. Observed dune face slopes steepened by 43% yet did not become vertical faces, and only 50% of the dunes evolved along a trajectory similar to the foreshore beach slope. Observations also indicate that dune crests were lowered during dune erosion. Moreover, analysis showed a correspondence between dune lowering and narrower beaches, smaller dune volumes, and/or longer wave impact.

  15. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  16. Testing model parameters for wave‐induced dune erosion using observations from Hurricane Sandy

    USGS Publications Warehouse

    Overbeck, Jacquelyn R.; Long, Joseph W.; Stockdon, Hilary F.

    2017-01-01

    Models of dune erosion depend on a set of assumptions that dictate the predicted evolution of dunes throughout the duration of a storm. Lidar observations made before and after Hurricane Sandy at over 800 profiles with diverse dune elevations, widths, and volumes are used to quantify specific dune erosion model parameters including the dune face slope, which controls dune avalanching, and the trajectory of the dune toe, which controls dune migration. Wave‐impact models of dune erosion assume a vertical dune face and erosion of the dune toe along the foreshore beach slope. Observations presented here show that these assumptions are not always valid and require additional testing if these models are to be used to predict coastal vulnerability for decision‐making purposes. Observed dune face slopes steepened by 43% yet did not become vertical faces, and only 50% of the dunes evolved along a trajectory similar to the foreshore beach slope. Observations also indicate that dune crests were lowered during dune erosion. Moreover, analysis showed a correspondence between dune lowering and narrower beaches, smaller dune volumes, and/or longer wave impact.

  17. EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology

    NASA Astrophysics Data System (ADS)

    Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt

    2017-04-01

    The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Especially model structural errors in the description of the dynamics are difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and detect initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the believed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
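
    The study uses an iterative EnKF with an augmented state and a closed-eye period; as a generic illustration of the augmented-state EnKF update it builds on, the sketch below performs one stochastic analysis step in which TDR-like water-content observations update both the state and appended parameters. The ensemble size, observation operator, and all numbers are assumptions, not the paper's configuration.

        import numpy as np

        def enkf_analysis(ensemble, H, y_obs, obs_err_std, rng):
            """One stochastic EnKF analysis step for an augmented state.

            ensemble: (n_ens, n_aug) matrix; each row stacks the water-content state and the
            parameters being estimated (the 'augmented state'). H maps the augmented state to
            the observed quantities (here a simple selection matrix). This generic update is an
            illustration, not the specific iterative/closed-eye implementation of the paper.
            """
            n_ens = ensemble.shape[0]
            X = ensemble - ensemble.mean(axis=0)                 # ensemble anomalies
            Y = (H @ ensemble.T).T                               # predicted observations per member
            Yp = Y - Y.mean(axis=0)
            R = np.eye(len(y_obs)) * obs_err_std**2
            P_xy = X.T @ Yp / (n_ens - 1)                        # cross-covariance state/obs
            P_yy = Yp.T @ Yp / (n_ens - 1) + R                   # innovation covariance
            K = P_xy @ np.linalg.inv(P_yy)                       # Kalman gain
            # Perturbed observations keep the analysis ensemble spread statistically consistent
            y_pert = y_obs + rng.normal(0.0, obs_err_std, size=(n_ens, len(y_obs)))
            return ensemble + (y_pert - Y) @ K.T

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            n_ens, n_state, n_par = 50, 4, 2                     # 4 water contents + 2 soil parameters
            ensemble = np.hstack([rng.normal(0.30, 0.03, (n_ens, n_state)),   # prior water contents
                                  rng.normal(0.0, 1.0, (n_ens, n_par))])      # prior (log-)parameters
            H = np.zeros((n_state, n_state + n_par))
            H[:, :n_state] = np.eye(n_state)                     # TDR observes the state only
            y_obs = np.array([0.27, 0.29, 0.31, 0.33])           # hypothetical TDR water contents
            analysis = enkf_analysis(ensemble, H, y_obs, obs_err_std=0.01, rng=rng)
            print("posterior mean state:", np.round(analysis[:, :n_state].mean(axis=0), 3))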

  18. Uncertainty quantification methodologies development for stress corrosion cracking of canister welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dingreville, Remi Philippe Michel; Bryan, Charles R.

    2016-09-30

    This letter report presents a probabilistic performance assessment model to evaluate the probability of canister failure (through-wall penetration) by SCC. The model first assesses whether environmental conditions for SCC – the presence of an aqueous film – are present at canister weld locations (where tensile stresses are likely to occur) on the canister surface. Geometry-specific storage system thermal models and weather data sets representative of U.S. spent nuclear fuel (SNF) storage sites are implemented to evaluate location-specific canister surface temperature and relative humidity (RH). As the canister cools and aqueous conditions become possible, the occurrence of corrosion is evaluated. Corrosion is modeled as a two-step process: first, pitting is initiated, and the extent and depth of pitting is a function of the chloride surface load and the environmental conditions (temperature and RH). Second, as corrosion penetration increases, the pit eventually transitions to a SCC crack, with crack initiation becoming more likely with increasing pit depth. Once pits convert to cracks, a crack growth model is implemented. The SCC growth model includes rate dependencies on both temperature and crack tip stress intensity factor, and crack growth only occurs in time steps when aqueous conditions are predicted. The model suggests that SCC is likely to occur over potential SNF interim storage intervals; however, this result is based on many modeling assumptions. Sensitivity analyses provide information on the model assumptions and parameter values that have the greatest impact on predicted storage canister performance, and provide guidance for further research to reduce uncertainties.

  19. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
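
    The analytically available posterior mentioned in the abstract follows from standard Gaussian conjugacy for a linear forward model. A minimal sketch, assuming line-integral data d = L e + noise and a GP prior covariance K_prior supplied by the (non-stationary) kernel:

    ```python
    import numpy as np

    def gp_tomography_posterior(L, d, noise_std, K_prior):
        """Posterior over emissivity e given line integrals d = L e + noise.

        L         : (n_lines, n_pixels) geometry (line-integral) matrix.
        d         : (n_lines,) measured line integrals.
        noise_std : scalar Gaussian noise level.
        K_prior   : (n_pixels, n_pixels) GP prior covariance; a non-stationary
                    kernel would encode spatially varying length scales.
        Returns the posterior mean and covariance (both analytic).
        """
        Sigma_n = noise_std ** 2 * np.eye(L.shape[0])
        S = L @ K_prior @ L.T + Sigma_n          # marginal covariance of the data
        G = K_prior @ L.T @ np.linalg.inv(S)     # "gain" matrix
        mean_post = G @ d                        # zero prior mean assumed
        cov_post = K_prior - G @ L @ K_prior
        return mean_post, cov_post
    ```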

  20. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  1. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    EPA Pesticide Factsheets

    Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
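
    A hedged sketch of the kind of baseline model described above: an exponential dose-response for the probability of eventual death combined with a Weibull distribution for time to death. Parameter values are illustrative, not the fitted ones from the rabbit data.

    ```python
    import numpy as np

    def p_death(dose, k=1e-6):
        """Exponential dose-response: probability of eventual death."""
        return 1.0 - np.exp(-k * dose)

    def survival(t, dose, k=1e-6, shape=2.0, scale=4.0):
        """S(t): probability of being alive at time t (days) after exposure.

        Deaths occur with probability p_death(dose); conditional on death,
        the time to death follows a Weibull(shape, scale) distribution.
        """
        weibull_cdf = 1.0 - np.exp(-(t / scale) ** shape)
        return 1.0 - p_death(dose, k) * weibull_cdf

    # Example: survival probability at 7 days for a 1e7-spore dose
    print(survival(7.0, 1e7))
    ```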

  2. Mathematical Modeling: Are Prior Experiences Important?

    ERIC Educational Resources Information Center

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  3. Assumptions to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  4. The lateral variation of P n velocity gradient under Eurasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaoning

    We report that the mantle-lid P-wave velocity gradient, or Pn velocity gradient, reflects the depth and lateral variations of the thermal and rheological state of the uppermost mantle. Mapping the Pn velocity gradient and its lateral variation helps us gain insight into the temperature, composition, and dynamics of the uppermost mantle. In addition, because the Pn velocity gradient has a profound influence on Pn propagation behavior, an accurate mapping of the Pn velocity gradient also improves the modeling and prediction of Pn travel times and amplitudes. In this study, I used measured Pn travel times to derive path-specific Pn velocity gradients. I then inverted these velocity gradients for two-dimensional (2-D) Pn velocity-gradient models for Eurasia based on the assumption that a path-specific Pn velocity gradient is the mean of laterally varying Pn velocity gradients along the Pn path. A Monte Carlo simulation indicates that the assumption is appropriate. The 2-D velocity-gradient models show that most of Eurasia has positive velocity gradients. High velocity gradients exist mainly in tectonically active regions. Most tectonically stable regions show low and more uniform velocity gradients. In conclusion, strong velocity-gradient variations occur largely along convergent plate boundaries, particularly under overriding plates.
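
    Under the stated assumption, each path-specific gradient is the (length-weighted) mean of the cell gradients crossed by the path, which yields a linear system. A minimal damped least-squares sketch (the tomographic method actually used in the study may differ):

    ```python
    import numpy as np

    def invert_gradient_map(segment_lengths, path_gradients, damping=0.1):
        """Invert path-specific Pn gradients for a cell-based gradient map.

        segment_lengths : (n_paths, n_cells) length of each path inside each cell.
        path_gradients  : (n_paths,) observed path-specific gradients.
        Each observation is treated as the length-weighted mean of cell
        gradients along the path, giving a linear system solved here by
        damped least squares.
        """
        # Row-normalise so each row averages (not sums) the cell gradients
        weights = segment_lengths / segment_lengths.sum(axis=1, keepdims=True)
        A = np.vstack([weights, damping * np.eye(weights.shape[1])])
        b = np.concatenate([path_gradients, np.zeros(weights.shape[1])])
        cell_gradients, *_ = np.linalg.lstsq(A, b, rcond=None)
        return cell_gradients
    ```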

  5. The lateral variation of P n velocity gradient under Eurasia

    DOE PAGES

    Yang, Xiaoning

    2017-05-03

    We report that the mantle-lid P-wave velocity gradient, or Pn velocity gradient, reflects the depth and lateral variations of the thermal and rheological state of the uppermost mantle. Mapping the Pn velocity gradient and its lateral variation helps us gain insight into the temperature, composition, and dynamics of the uppermost mantle. In addition, because the Pn velocity gradient has a profound influence on Pn propagation behavior, an accurate mapping of the Pn velocity gradient also improves the modeling and prediction of Pn travel times and amplitudes. In this study, I used measured Pn travel times to derive path-specific Pn velocity gradients. I then inverted these velocity gradients for two-dimensional (2-D) Pn velocity-gradient models for Eurasia based on the assumption that a path-specific Pn velocity gradient is the mean of laterally varying Pn velocity gradients along the Pn path. A Monte Carlo simulation indicates that the assumption is appropriate. The 2-D velocity-gradient models show that most of Eurasia has positive velocity gradients. High velocity gradients exist mainly in tectonically active regions. Most tectonically stable regions show low and more uniform velocity gradients. In conclusion, strong velocity-gradient variations occur largely along convergent plate boundaries, particularly under overriding plates.

  6. Does Specification Matter? Experiments with Simple Multiregional Probabilistic Population Projections

    PubMed Central

    Raymer, James; Abel, Guy J.; Rogers, Andrei

    2012-01-01

    Population projection models that introduce uncertainty are a growing subset of projection models in general. In this paper, we focus on the importance of decisions made with regard to the model specifications adopted. We compare the forecasts and prediction intervals associated with four simple regional population projection models: an overall growth rate model, a component model with net migration, a component model with in-migration and out-migration rates, and a multiregional model with destination-specific out-migration rates. Vector autoregressive models are used to forecast future rates of growth, birth, death, net migration, in-migration and out-migration, and destination-specific out-migration for the North, Midlands and South regions in England. They are also used to forecast different international migration measures. The base data represent a time series of annual data provided by the Office for National Statistics from 1976 to 2008. The results illustrate how both the forecasted subpopulation totals and the corresponding prediction intervals differ for the multiregional model in comparison to other simpler models, as well as for different assumptions about international migration. The paper ends with a discussion of our results and possible directions for future research. PMID:23236221
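
    A minimal sketch of forecasting such annual rates with a vector autoregression, here using the statsmodels VAR implementation on synthetic stand-in data (the series names and values are placeholders, not the ONS data):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Illustrative stand-in for annual regional rates, 1976-2008 (33 years)
    rng = np.random.default_rng(1)
    rates = pd.DataFrame(
        0.01 + 0.002 * rng.standard_normal((33, 3)),
        columns=["North", "Midlands", "South"],
    )

    model = VAR(rates)
    fit = model.fit(maxlags=1)

    # Forecast the next 10 years with 95% prediction intervals
    point, lower, upper = fit.forecast_interval(rates.values[-fit.k_ar:],
                                                steps=10, alpha=0.05)
    print(point[0], lower[0], upper[0])   # first forecast year, all regions
    ```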

  7. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.

    PubMed

    Jones, Matt; Love, Bradley C

    2011-08-01

    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.

  8. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
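
    The observation model is a weighted superposition of known library sounds. One common assumption on the weights is non-negativity; a minimal sketch using non-negative least squares (the paper instead encodes such assumptions as prior pdfs and combines them):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def estimate_weights(observed_spectrum, library_spectra):
        """Estimate non-negative weights w in  observed ≈ library_spectra @ w.

        library_spectra : (n_bins, n_sounds) magnitude spectra of known sounds.
        """
        w, residual = nnls(library_spectra, observed_spectrum)
        return w, residual

    # Toy example: two library "sounds" and a mixture of them
    library = np.array([[1.0, 0.0],
                        [0.5, 1.0],
                        [0.0, 0.8]])
    mix = library @ np.array([0.7, 0.3])
    print(estimate_weights(mix, library)[0])   # recovers ~ [0.7, 0.3]
    ```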

  9. Development of state and transition model assumptions used in National Forest Plan revision

    Treesearch

    Eric B. Henderson

    2008-01-01

    State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...

  10. Assumptions to the annual energy outlook 1999 : with projections to 2020

    DOT National Transportation Integrated Search

    1998-12-16

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 19991 (AEO99), including general features of : the model structure, assumptions concerning energy ...

  11. Assumptions to the annual energy outlook 2000 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-01-01

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20001 (AEO2000), including general features of : the model structure, assumptions concerning energ...

  12. Assumptions to the annual energy outlook 2001 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-12-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20011 (AEO2001), including general features of : the model structure, assumptions concerning ener...

  13. Assumptions for the annual energy outlook 2003 : with projections to 2025

    DOT National Transportation Integrated Search

    2003-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to : generate the projections in the Annual Energy Outlook 20031 (AEO2003), including general features of : the model structure, assumptions concerning ener...

  14. Quality Reporting of Multivariable Regression Models in Observational Studies: Review of a Representative Sample of Articles Published in Biomedical Journals.

    PubMed

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M

    2016-05-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. Review of a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.

  15. Multiscale Modeling of Gene-Behavior Associations in an Artificial Neural Network Model of Cognitive Development.

    PubMed

    Thomas, Michael S C; Forrester, Neil A; Ronald, Angelica

    2016-01-01

    In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively common gene variants and individual differences in behavior. It is perhaps surprising that such associations can be detected despite the remoteness of these levels of description, and the fact that behavior is the outcome of an extended developmental process involving interaction of the whole organism with a variable environment. Given that they have been detected, how do such associations inform cognitive-level theories? To investigate this question, we employed a multiscale computational model of development, using a sample domain drawn from the field of language acquisition. The model comprised an artificial neural network model of past-tense acquisition trained using the backpropagation learning algorithm, extended to incorporate population modeling and genetic algorithms. It included five levels of description-four internal: genetic, network, neurocomputation, behavior; and one external: environment. Since the mechanistic assumptions of the model were known and its operation was relatively transparent, we could evaluate whether cross-level associations gave an accurate picture of causal processes. We established that associations could be detected between artificial genes and behavioral variation, even under polygenic assumptions of a many-to-one relationship between genes and neurocomputational parameters, and when an experience-dependent developmental process interceded between the action of genes and the emergence of behavior. We evaluated these associations with respect to their specificity (to different behaviors, to function vs. structure), to their developmental stability, and to their replicability, as well as considering issues of missing heritability and gene-environment interactions. We argue that gene-behavior associations can inform cognitive theory with respect to effect size, specificity, and timing. The model demonstrates a means by which researchers can undertake multiscale modeling with respect to cognition and develop highly specific and complex hypotheses across multiple levels of description. Copyright © 2015 Cognitive Science Society, Inc.

  16. Models projecting the fate of fish populations under climate change need to be based on valid physiological mechanisms.

    PubMed

    Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E

    2017-09-01

    Some recent modelling papers projecting smaller fish sizes and catches in a warmer future are based on erroneous assumptions regarding (i) the scaling of gills with body mass and (ii) the energetic cost of 'maintenance'. Assumption (i) posits that insurmountable geometric constraints prevent respiratory surface areas from growing as fast as body volume. It is argued that these constraints explain allometric scaling of energy metabolism, whereby larger fishes have relatively lower mass-specific metabolic rates. Assumption (ii) concludes that when fishes reach a certain size, basal oxygen demands will not be met, because of assumption (i). We here demonstrate unequivocally, by applying accepted physiological principles with reference to the existing literature, that these assumptions are not valid. Gills are folded surfaces, where the scaling of surface area to volume is not constrained by spherical geometry. The gill surface area can, in fact, increase linearly in proportion to gill volume and body mass. We cite the large body of evidence demonstrating that respiratory surface areas in fishes reflect metabolic needs, not vice versa, which explains the large interspecific variation in scaling of gill surface areas. Finally, we point out that future studies basing their predictions on models should incorporate factors for scaling of metabolic rate and for temperature effects on metabolism, which agree with measured values, and should account for interspecific variation in scaling and temperature effects. It is possible that some fishes will become smaller in the future, but to make reliable predictions the underlying mechanisms need to be identified and sought elsewhere than in geometric constraints on gill surface area. Furthermore, to ensure that useful information is conveyed to the public and policymakers about the possible effects of climate change, it is necessary to improve communication and congruity between fish physiologists and fisheries scientists. © 2017 John Wiley & Sons Ltd.

  17. The composite dynamic method as evidence for age-specific waterfowl mortality

    USGS Publications Warehouse

    Burnham, Kenneth P.; Anderson, David R.

    1979-01-01

    For the past 25 years estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions for this method and derived a valid goodness of fit test. We performed this test on 45 data sets representing a cross section of banded samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P <0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P <0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. We also demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.

  18. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance while t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
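
    A sketch contrasting the double-filter criterion with a simple variance-shrinkage ("moderated") t-statistic of the kind the paper argues better approximates the likelihood ratio test; the shrinkage weight below is an illustrative placeholder, not the paper's estimator:

    ```python
    import numpy as np
    from scipy import stats

    def double_filter(x, y, fc_cutoff=1.0, alpha=0.05):
        """Flag genes passing both |log2 fold change| and t-test filters.

        x, y : (n_genes, n_reps) log2 expression in two conditions.
        """
        log_fc = x.mean(axis=1) - y.mean(axis=1)
        t, p = stats.ttest_ind(x, y, axis=1)
        return (np.abs(log_fc) >= fc_cutoff) & (p <= alpha)

    def shrinkage_t(x, y, prior_weight=0.5):
        """A simple moderated t: gene variances shrunk toward a common value.

        prior_weight is an illustrative mixing weight between the pooled
        common variance and each gene-specific variance.
        """
        n1, n2 = x.shape[1], y.shape[1]
        diff = x.mean(axis=1) - y.mean(axis=1)
        s2_gene = (x.var(axis=1, ddof=1) * (n1 - 1) +
                   y.var(axis=1, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
        s2_common = s2_gene.mean()
        s2_shrunk = prior_weight * s2_common + (1 - prior_weight) * s2_gene
        return diff / np.sqrt(s2_shrunk * (1.0 / n1 + 1.0 / n2))
    ```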

  19. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure for the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. The approach is based on simple modelling assumptions and prepares a framework for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increase in urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
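
    A minimal sketch of the logistic mean-field dynamics that the asymptotic simplification refers to; the criterion mapping the solution to a complication indicator below is only a placeholder, since the paper derives the organ- and patient-specific NTCP from this solution:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def logistic_rhs(t, n, b, d, K):
        """Mean-field limit of the logistic birth-death process for the
        number of functional cells in the irradiated organ."""
        return (b - d) * n * (1.0 - n / K)

    # Recovery of a cell population knocked down to 5% of capacity by treatment
    b, d, K = 0.4, 0.1, 1.0e6
    sol = solve_ivp(logistic_rhs, (0.0, 60.0), [0.05 * K], args=(b, d, K),
                    t_eval=np.linspace(0.0, 60.0, 200))

    # Placeholder complication criterion: functional fraction still below 50%
    # at 30 days (the paper's NTCP is a more careful functional of this solution).
    fraction_at_30d = sol.y[0, np.searchsorted(sol.t, 30.0)] / K
    print(round(fraction_at_30d, 3), fraction_at_30d < 0.5)
    ```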

  20. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

    The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model's primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model's assumptions, major components and modes of operation

  1. 77 FR 74421 - Approval and Promulgation of Air Quality Implementation Plans for PM2.5

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    ... calculation of future year PM 2.5 design values using the SMAT assumptions contained in the modeled guidance... components. Future PM 2.5 design values at specified monitoring sites were estimated by adding the future... nonattainment area, all future site-specific PM 2.5 design values were below the concentration specified in the...

  2. Sex-biased dispersal, kin selection and the evolution of sexual conflict.

    PubMed

    Faria, Gonçalo S; Varela, Susana A M; Gardner, Andy

    2015-10-01

    There is growing interest in resolving the curious disconnect between the fields of kin selection and sexual selection. Rankin's (2011, J. Evol. Biol. 24, 71-81) theoretical study of the impact of kin selection on the evolution of sexual conflict in viscous populations has been particularly valuable in stimulating empirical research in this area. An important goal of that study was to understand the impact of sex-specific rates of dispersal upon the coevolution of male-harm and female-resistance behaviours. But the fitness functions derived in Rankin's study do not flow from his model's assumptions and, in particular, are not consistent with sex-biased dispersal. Here, we develop new fitness functions that do logically flow from the model's assumptions, to determine the impact of sex-specific patterns of dispersal on the evolution of sexual conflict. Although Rankin's study suggested that increasing male dispersal always promotes the evolution of male harm and that increasing female dispersal always inhibits the evolution of male harm, we find that the opposite can also be true, depending upon parameter values. © 2015 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.

  3. No control genes required: Bayesian analysis of qRT-PCR data.

    PubMed

    Matz, Mikhail V; Wright, Rachel M; Scott, James G

    2013-01-01

    Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.

  4. Evaluating Model-Driven Development for large-scale EHRs through the openEHR approach.

    PubMed

    Christensen, Bente; Ellingsen, Gunnar

    2016-05-01

    In healthcare, the openEHR standard is a promising Model-Driven Development (MDD) approach for electronic healthcare records. This paper aims to identify key socio-technical challenges when the openEHR approach is put to use in Norwegian hospitals. More specifically, key fundamental assumptions are investigated empirically. These assumptions promise a clear separation of technical and domain concerns, users being in control of the modelling process, and widespread user commitment. Finally, these assumptions promise an easy way to model and map complex organizations. This longitudinal case study is based on an interpretive approach, whereby data were gathered through 440h of participant observation, 22 semi-structured interviews and extensive document studies over 4 years. The separation of clinical and technical concerns seemed to be aspirational, because both designing the technical system and modelling the domain required technical and clinical competence. Hence developers and clinicians found themselves working together in both arenas. User control and user commitment seemed not to apply in large-scale projects, as modelling the domain turned out to be too complicated and hence to appeal only to especially interested users worldwide, not the local end-users. Modelling proved to be a complex standardization process that shaped both the actual modelling and healthcare practice itself. A broad assemblage of contributors seems to be needed for developing an archetype-based system, in which roles, responsibilities and contributions cannot be clearly defined and delimited. The way MDD occurs has implications for medical practice per se in the form of the need to standardize practices to ensure that medical concepts are uniform across practices. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. NGC1300 dynamics - II. The response models

    NASA Astrophysics Data System (ADS)

    Kalapotharakos, C.; Patsis, P. A.; Grosbøl, P.

    2010-10-01

    We study the stellar response in a spectrum of potentials describing the barred spiral galaxy NGC1300. These potentials have been presented in a previous paper and correspond to three different assumptions as regards the geometry of the galaxy. For each potential we consider a wide range of Ωp pattern speed values. Our goal is to discover the geometries and the Ωp supporting specific morphological features of NGC1300. For this purpose we use the method of response models. In order to compare the images of NGC1300 with the density maps of our models, we define a new index which is a generalization of the Hausdorff distance. This index helps us to find out quantitatively which cases reproduce specific features of NGC1300 in an objective way. Furthermore, we construct alternative models following a Schwarzschild-type technique. By this method we vary the weights of the various energy levels, and thus the orbital contribution of each energy, in order to minimize the differences between the response density and that deduced from the surface density of the galaxy, under certain assumptions. We find that the models corresponding to Ωp ~ 16 and 22 km s^-1 kpc^-1 are able to reproduce efficiently certain morphological features of NGC1300, with each one having its advantages and drawbacks. Based on observations collected at the European Southern Observatory, Chile: programme ESO 69.A-0021.

  6. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    ERIC Educational Resources Information Center

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…

  7. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180 degrees geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement between the results arises from the basic assumptions which the two codes used in their calculations. Both codes assume a "free" electron model for Compton interactions. This assumption will underestimate the results and invalidates any predicted and experimental spectra when compared with each other.

  8. Modelling the epidemiology of Escherichia coli ST131 and the impact of interventions on the community and healthcare centres.

    PubMed

    Talaminos, A; López-Cerero, L; Calvillo, J; Pascual, A; Roa, L M; Rodríguez-Baño, J

    2016-07-01

    ST131 Escherichia coli is an emergent clonal group that has achieved successful worldwide spread through a combination of virulence and antimicrobial resistance. Our aim was to develop a mathematical model, based on current knowledge of the epidemiology of ESBL-producing and non-ESBL-producing ST131 E. coli, to provide a framework enabling a better understanding of its spread within the community, in hospitals and long-term care facilities, and the potential impact of specific interventions on the rates of infection. A model belonging to the SEIS (Susceptible-Exposed-Infected-Susceptible) class of compartmental models, with specific modifications, was developed. Quantification of the model is based on the law of mass preservation, which helps determine the relationships between flows of individuals and different compartments. Quantification is deterministic or probabilistic depending on subpopulation size. The assumptions for the model are based on several developed epidemiological studies. Based on the assumptions of the model, an intervention capable of sustaining a 25% reduction in person-to-person transmission shows a significant reduction in the rate of infections caused by ST131; the impact is higher for non-ESBL-producing ST131 isolates than for ESBL producers. On the other hand, an isolated intervention reducing exposure to antimicrobial agents has much more limited impact on the rate of ST131 infection. Our results suggest that interventions achieving a continuous reduction in the transmission of ST131 in households, nursing homes and hospitals offer the best chance of reducing the burden of the infections caused by these isolates.
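
    A toy single-population SEIS-type sketch of the scenario comparison described above (baseline versus a sustained 25% reduction in person-to-person transmission). The compartment structure, rates and population size are illustrative and much simpler than the paper's community/hospital/long-term-care model:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def seis_rhs(t, y, beta, sigma, gamma):
        """Minimal SEIS dynamics: S -> E -> I -> S (no lasting immunity)."""
        S, E, I = y
        N = S + E + I
        new_exposed = beta * S * I / N
        return [-new_exposed + gamma * I,      # dS/dt
                new_exposed - sigma * E,       # dE/dt
                sigma * E - gamma * I]         # dI/dt

    def run(beta, years=10.0, N=1e5, I0=10.0):
        y0 = [N - I0, 0.0, I0]
        t = np.linspace(0.0, 365.0 * years, 1000)
        sol = solve_ivp(seis_rhs, (t[0], t[-1]), y0, t_eval=t,
                        args=(beta, 1.0 / 5.0, 1.0 / 30.0))
        return sol.t, sol.y[2]

    # Baseline vs a sustained 25% reduction in person-to-person transmission
    beta0 = 0.05
    _, I_base = run(beta0)
    _, I_intervention = run(0.75 * beta0)
    print(I_base[-1], I_intervention[-1])
    ```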

  9. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
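
    The limitation described above can be illustrated numerically: a position weight matrix assigns site probabilities as products of independent per-position probabilities, whereas binding probability is a saturating, non-linear function of affinity. The toy PWM, the use of -log probability as a proxy energy, and the chemical-potential value are our illustrative assumptions:

    ```python
    import numpy as np

    BASES = "ACGT"

    def pwm_prob(site, pwm):
        """Probability of a site under a PWM with independent positions."""
        return np.prod([pwm[i][BASES.index(b)] for i, b in enumerate(site)])

    def occupancy(energy, mu=0.0):
        """Binding probability from a Boltzmann/Fermi form: non-linear and
        saturating in affinity, unlike the PWM probability."""
        return 1.0 / (1.0 + np.exp(energy - mu))

    # Toy 3-bp PWM (rows: positions, columns: A, C, G, T)
    pwm = np.array([[0.7, 0.1, 0.1, 0.1],
                    [0.1, 0.7, 0.1, 0.1],
                    [0.1, 0.1, 0.7, 0.1]])

    for site in ("ACG", "TCG", "TTT"):
        # Treat -log probability as a proxy binding energy for illustration
        e = -np.log(pwm_prob(site, pwm))
        print(site, round(pwm_prob(site, pwm), 4), round(occupancy(e, mu=1.0), 3))
    ```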

  10. Critical appraisal of assumptions in chains of model calculations used to project local climate impacts for adaptation decision support—the case of Baakse Beek

    NASA Astrophysics Data System (ADS)

    van der Sluijs, Jeroen P.; Arjan Wardekker, J.

    2015-04-01

    In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of, respectively, subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so-called 'mineralization reduction factor' (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields rich qualitative insight into model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well-informed recommendations for model improvement.

  11. Design and validation of diffusion MRI models of white matter

    NASA Astrophysics Data System (ADS)

    Jelescu, Ileana O.; Budde, Matthew D.

    2017-11-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus.
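
    A minimal example of the kind of biologically inspired compartment model the review discusses: a two-compartment signal that is a volume-fraction-weighted sum of mono-exponential decays. Parameter names and values are illustrative only:

    ```python
    import numpy as np

    def two_compartment_signal(b, f_intra=0.6, D_intra=1.7e-3, D_extra=1.0e-3):
        """Simplified compartmental diffusion MRI signal along a fibre.

        b       : b-values in s/mm^2 (array).
        f_intra : intra-axonal signal fraction.
        D_*     : apparent diffusivities in mm^2/s (illustrative values).
        The signal is the volume-fraction-weighted sum of mono-exponential
        decays, one per compartment.
        """
        return (f_intra * np.exp(-b * D_intra) +
                (1.0 - f_intra) * np.exp(-b * D_extra))

    b_values = np.array([0.0, 1000.0, 2000.0, 3000.0])
    print(two_compartment_signal(b_values))
    ```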

  12. Design and validation of diffusion MRI models of white matter

    PubMed Central

    Jelescu, Ileana O.; Budde, Matthew D.

    2018-01-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus. PMID:29755979

  13. Flexible modeling improves assessment of prognostic value of C-reactive protein in advanced non-small cell lung cancer.

    PubMed

    Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D

    2010-03-30

    C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazards (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In the Cox PH model, high CRP increased the risk of death (HR=1.11 per doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP.
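
    For readers who want to check these two assumptions in their own data, a sketch using the lifelines package on synthetic stand-in data is given below; the tooling and the crude quadratic-term check of linearity are our choices, not the flexible spline-based method used in the study:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    # Synthetic stand-in for the cohort: log2(CRP), age, and survival times
    rng = np.random.default_rng(42)
    n = 269
    log2_crp = rng.normal(3.0, 1.5, n)
    age = rng.normal(65.0, 8.0, n)
    hazard = 0.02 * np.exp(0.10 * (log2_crp - 3.0) + 0.02 * (age - 65.0))
    time = rng.exponential(1.0 / hazard)
    event = (time < 36.0).astype(int)          # administrative censoring at 36 months
    df = pd.DataFrame({"time": np.minimum(time, 36.0), "event": event,
                       "log2_crp": log2_crp, "age": age})

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "exp(coef)", "p"]])

    # Schoenfeld-residual-based check of the proportional hazards assumption
    print(proportional_hazard_test(cph, df, time_transform="rank").summary)

    # Crude linearity check: does adding a quadratic CRP term improve the fit?
    df["log2_crp_sq"] = df["log2_crp"] ** 2
    cph_quad = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph_quad.log_likelihood_ - cph.log_likelihood_)
    ```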

  14. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  15. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694

  16. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  17. Linking normative models of natural tasks to descriptive models of neural response.

    PubMed

    Jaini, Priyank; Burge, Johannes

    2017-10-01

    Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural response. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Then, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.

  18. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    PubMed Central

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than other assumptions (up to 20pp and 37pp larger, respectively). Conclusion Different disease progression assumptions on and post-ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136

  19. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters, and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
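
    As a much simpler illustration than the dictionary-learning approach described above, the hedged sketch below estimates a space-varying noise level by computing patch-wise standard deviations on a synthetic image and fitting a two-component Gaussian mixture to them; the image, patch size, and noise levels are all hypothetical.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)

      # Synthetic image whose noise level differs between the left and right halves.
      noisy = np.zeros((64, 64))
      noisy[:, :32] += rng.normal(0.0, 5.0, size=(64, 32))   # low-noise region
      noisy[:, 32:] += rng.normal(0.0, 20.0, size=(64, 32))  # high-noise region

      # Crude local noise estimate: standard deviation of non-overlapping 8x8 patches.
      patch = 8
      stds = [noisy[i:i + patch, j:j + patch].std()
              for i in range(0, 64, patch) for j in range(0, 64, patch)]

      # A two-component Gaussian mixture separates the two noise regimes.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(np.array(stds).reshape(-1, 1))
      print("estimated noise levels:", np.sort(gmm.means_.ravel()))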

  20. The ratio of NPP to GPP: evidence of change over the course of stand development.

    PubMed

    Mäkelä, A; Valentine, H T

    2001-09-01

    Using Scots pine (Pinus sylvestris L.) in Fenno-Scandia as a case study, we investigate whether net primary production (NPP) and maintenance respiration are constant fractions of gross primary production (GPP) as even-aged mono-specific stands progress from initiation to old age. A model of the ratio of NPP to GPP is developed based on (1) the classical model of respiration, which divides total respiration into construction and maintenance components, and (2) a process-based model, which derives respiration from processes including construction, nitrate uptake and reduction, ion uptake, phloem loading and maintenance. Published estimates of specific respiration and production rates, and some recent measurements of components of dry matter in stands of different ages, are used to quantify the two approaches over the course of stand development in an average environment. Both approaches give similar results, showing a decrease in the NPP/GPP ratio with increasing tree height. In addition, we show that stand-growth models fitted under three different sets of assumptions - (i) annual specific rates of maintenance respiration of sapwood (m_W) and photosynthesis (s_C) are constant; (ii) m_W is constant, but s_C decreases with increasing tree height; and (iii) total maintenance respiration is a constant fraction of GPP and s_C decreases with increasing tree height - can lead to nearly identical model projections that agree with empirical observations of NPP and stand-growth variables. Remeasurements of GPP and respiration over time in chronosequences of stands may be needed to discern which set of assumptions is correct. Total (construction + maintenance) sapwood respiration per unit mass of sapwood (kg C (kg C year)^-1) decreased with increasing stand age, sapwood stock, and average tree height under all three assumptions. However, total sapwood respiration (kg C (ha year)^-1) increased over the course of stand development under (i) and (ii), contributing to a downward trend in the time course of the NPP/GPP ratio after closure. A moderate decrease in m_W with increasing tree height or sapwood cross-sectional area had little effect on the downward trend. On the basis of this evidence, we argue that a significant decline in the NPP/GPP ratio with tree size or age seems highly probable, although the decline may appear insignificant over some segments of stand development. We also argue that, because stand-growth models can give correct answers for the wrong reasons, statistical calibration of such models should be avoided whenever possible; instead, values of physiological parameters should come from measurements of the physiological processes themselves.
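
    The classical construction-plus-maintenance partition of respiration described above can be sketched in a few lines; the hedged sketch below (with made-up parameter values, not those of the study) shows how the NPP/GPP ratio declines as the sapwood maintenance load grows over stand development.

      # Classical respiration partition: R = R_maintenance + R_construction,
      # with construction respiration proportional to new biomass production.
      # NPP = GPP - R, so NPP/GPP falls as sapwood maintenance costs grow with tree size.
      def npp_to_gpp_ratio(gpp, sapwood_mass, m_w=0.05, construction_cost=0.25):
          # m_w: maintenance respiration per unit sapwood C per year (illustrative value).
          # construction_cost: C respired per unit C built into new tissue (illustrative value).
          r_maintenance = m_w * sapwood_mass
          # Production G satisfies GPP = G + r_maintenance + construction_cost * G.
          g = (gpp - r_maintenance) / (1.0 + construction_cost)
          return g / gpp

      for sapwood in (10.0, 40.0, 80.0):   # kg C of sapwood, growing with stand age
          print(sapwood, round(npp_to_gpp_ratio(gpp=20.0, sapwood_mass=sapwood), 3))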

  1. Causal inference with missing exposure information: Methods and applications to an obstetric study.

    PubMed

    Zhang, Zhiwei; Liu, Wei; Zhang, Bo; Tang, Li; Zhang, Jun

    2016-10-01

    Causal inference in observational studies is frequently challenged by the occurrence of missing data, in addition to confounding. Motivated by the Consortium on Safe Labor, a large observational study of obstetric labor practice and birth outcomes, this article focuses on the problem of missing exposure information in a causal analysis of observational data. This problem can be approached from different angles (i.e. missing covariates and causal inference), and useful methods can be obtained by drawing upon the available techniques and insights in both areas. In this article, we describe and compare a collection of methods based on different modeling assumptions, under standard assumptions for missing data (i.e. missing-at-random and positivity) and for causal inference with complete data (i.e. no unmeasured confounding and another positivity assumption). These methods involve three models: one for treatment assignment, one for the dependence of outcome on treatment and covariates, and one for the missing data mechanism. In general, consistent estimation of causal quantities requires correct specification of at least two of the three models, although there may be some flexibility as to which two models need to be correct. Such flexibility is afforded by doubly robust estimators adapted from the missing covariates literature and the literature on causal inference with complete data, and by a newly developed triply robust estimator that is consistent if any two of the three models are correct. The methods are applied to the Consortium on Safe Labor data and compared in a simulation study mimicking the Consortium on Safe Labor. © The Author(s) 2013.
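
    As a hedged illustration of the double-robustness idea discussed above (with complete data and a generic treatment-assignment plus outcome-regression pair, not the article's triply robust estimator, which additionally models the missing-exposure mechanism), the sketch below computes an augmented inverse-probability-weighting estimate of a mean potential outcome on simulated data.

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=(n, 2))                        # measured confounders
      p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))   # true propensity score
      a = rng.binomial(1, p)                             # treatment (exposure)
      y = 1.0 + 2.0 * a + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

      # Working models: propensity score and outcome regression among the treated.
      ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
      m1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)

      # AIPW estimator of E[Y(1)]: consistent if either working model is correct.
      mu1_dr = np.mean(a * (y - m1) / ps + m1)
      print("doubly robust estimate of E[Y(1)]:", round(mu1_dr, 3))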

  2. Do causal concentration-response functions exist? A critical review of associational and causal relations between fine particulate matter and mortality.

    PubMed

    Cox, Louis Anthony Tony

    2017-08-01

    Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.

  3. The Teacher, the Physician and the Person: Exploring Causal Connections between Teaching Performance and Role Model Types Using Directed Acyclic Graphs

    PubMed Central

    Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.

    2013-01-01

    Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020

  4. Dynamic option pricing with endogenous stochastic arbitrage

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Montalva, Rodrigo; Pellicer, Rely; Villena, Marcelo

    2010-09-01

    Only a few efforts have been made to relax one of the key assumptions of the Black-Scholes model: the no-arbitrage assumption. This is despite the fact that arbitrage processes usually exist in the real world, even though they tend to be short-lived. The purpose of this paper is to develop an option pricing model with endogenous stochastic arbitrage, capable of modelling, in a general fashion, any future and underlying asset that deviates from its market equilibrium. Thus, this investigation empirically calibrates the arbitrage on futures on the S&P 500 index using transaction data from September 1997 to June 2009, from which a specific type of arbitrage called “arbitrage bubble”, based on a t-step function, is identified and hence used in our model. The theoretical results obtained for Binary and European call options, for this kind of arbitrage, show that an investment strategy that takes advantage of the identified arbitrage possibility can be defined, whenever it is possible to anticipate in relative terms the amplitude and timespan of the process. Finally, the new trajectory of the stock price is analytically estimated for a specific case of arbitrage and some numerical illustrations are developed. We find that the consequences of a finite and small endogenous arbitrage not only change the trajectory of the asset price during the period when it started, but also after the arbitrage bubble has already gone. In this context, our model will allow us to calibrate the B-S model to that new trajectory even when the arbitrage has already started.
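
    For reference, the no-arbitrage baseline that the paper relaxes is the standard Black-Scholes price; the hedged sketch below implements that textbook formula for a European call (the endogenous arbitrage-bubble extension itself is not reproduced, and the inputs are illustrative).

      import numpy as np
      from scipy.stats import norm

      def black_scholes_call(s, k, t, r, sigma):
          # Standard (no-arbitrage) Black-Scholes price of a European call option.
          d1 = (np.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
          d2 = d1 - sigma * np.sqrt(t)
          return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

      # Illustrative inputs: spot 100, strike 105, 6 months to expiry, 2% rate, 25% volatility.
      print(round(black_scholes_call(100.0, 105.0, 0.5, 0.02, 0.25), 2))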

  5. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  6. Estimating stage-specific daily survival probabilities of nests when nest age is unknown

    USGS Publications Warehouse

    Stanley, T.R.

    2004-01-01

    Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g. incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests can not or should not be aged and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
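
    For context, the Mayfield estimator that these models generalize can be written in one line; the hedged sketch below computes the daily survival rate from exposure days and observed failures (the stage-specific likelihood of the paper is not reproduced, and the numbers are illustrative).

      def mayfield_daily_survival(exposure_days, failures):
          # Mayfield (1961, 1975) estimator: daily nest survival = 1 - failures / exposure days.
          return 1.0 - failures / exposure_days

      # Illustrative numbers: 500 nest-days of exposure with 20 nest failures.
      dsr = mayfield_daily_survival(500.0, 20)
      print("daily survival:", round(dsr, 3))
      print("survival over a 24-day nesting period:", round(dsr ** 24, 3))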

  7. A Critical Discussion of Deep and Surface Processing: What It Means, How It Is Measured, the Role of Context, and Model Specification

    ERIC Educational Resources Information Center

    Dinsmore, Daniel L.; Alexander, Patricia A.

    2012-01-01

    The prevailing assumption by some that deep processing promotes stronger learning outcomes while surface processing promotes weaker learning outcomes has been called into question by the inconsistency and ambiguity of results in investigations of the relation between levels of processing and performance. The purpose of this literature review is to…

  8. Variability and uncertainty in life cycle assessment models for greenhouse gas emissions from Canadian oil sands production.

    PubMed

    Brandt, Adam R

    2012-01-17

    Because of interest in greenhouse gas (GHG) emissions from transportation fuels production, a number of recent life cycle assessment (LCA) studies have calculated GHG emissions from oil sands extraction, upgrading, and refining pathways. The results from these studies vary considerably. This paper reviews factors affecting energy consumption and GHG emissions from oil sands extraction. It then uses publicly available data to analyze the assumptions made in the LCA models to better understand the causes of variability in emissions estimates. It is found that the variation in oil sands GHG estimates is due to a variety of causes. In approximate order of importance, these are scope of modeling and choice of projects analyzed (e.g., specific projects vs industry averages); differences in assumed energy intensities of extraction and upgrading; differences in the fuel mix assumptions; treatment of secondary noncombustion emissions sources, such as venting, flaring, and fugitive emissions; and treatment of ecological emissions sources, such as land-use change-associated emissions. The GHGenius model is recommended as the LCA model that is most congruent with reported industry average data. GHGenius also has the most comprehensive system boundaries. Last, remaining uncertainties and future research needs are discussed.

  9. Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings

    PubMed Central

    LaFave, Daniel; Thomas, Duncan

    2016-01-01

    The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430

  10. A clinical trial design using the concept of proportional time using the generalized gamma ratio distribution.

    PubMed

    Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S

    2017-11-20

    Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
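
    The proportional time idea can be illustrated with a short simulation (a hedged sketch, not the paper's sample size machinery): control-arm event times are drawn from a generalized gamma distribution, and the treatment arm simply multiplies every event time by a fixed ratio; the shape and scale parameters below are invented.

      import numpy as np
      from scipy.stats import gengamma

      # Control-arm event times from a generalized gamma distribution (illustrative shapes).
      a, c, scale = 2.0, 1.5, 12.0   # scale in months
      t_control = gengamma(a, c, scale=scale).rvs(size=5000, random_state=0)

      # Proportional time assumption: treatment multiplies event times by a constant ratio.
      time_ratio = 1.5
      t_treated = time_ratio * t_control

      print("median control:", round(np.median(t_control), 1), "months")
      print("median treated:", round(np.median(t_treated), 1), "months")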

  11. Dissipative dark matter halos: The steady state solution

    NASA Astrophysics Data System (ADS)

    Foot, R.

    2018-02-01

    Dissipative dark matter, where dark matter particle properties closely resemble familiar baryonic matter, is considered. Mirror dark matter, which arises from an isomorphic hidden sector, is a specific and theoretically constrained scenario. Other possibilities include models with more generic hidden sectors that contain massless dark photons [unbroken U(1) gauge interactions]. Such dark matter not only features dissipative cooling processes but also is assumed to have nontrivial heating sourced by ordinary supernovae (facilitated by the kinetic mixing interaction). The dynamics of dissipative dark matter halos around rotationally supported galaxies, influenced by heating as well as cooling processes, can be modeled by fluid equations. For a sufficiently isolated galaxy with a stable star formation rate, the dissipative dark matter halos are expected to evolve to a steady state configuration which is in hydrostatic equilibrium and where heating and cooling rates locally balance. Here, we take into account the major cooling and heating processes, and numerically solve for the steady state solution under the assumptions of spherical symmetry, negligible dark magnetic fields, and that supernova sourced energy is transported to the halo via dark radiation. For the parameters considered, and assumptions made, we were unable to find a physically realistic solution for the constrained case of mirror dark matter halos. Halo cooling generally exceeds heating at realistic halo mass densities. This problem can be rectified in more generic dissipative dark matter models, and we discuss a specific example in some detail.

  12. Flexible Mediation Analysis With Multiple Mediators.

    PubMed

    Steen, Johan; Loeys, Tom; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2017-07-15

    The advent of counterfactual-based mediation analysis has triggered enormous progress on how, and under what assumptions, one may disentangle path-specific effects upon combining arbitrary (possibly nonlinear) models for mediator and outcome. However, current developments have largely focused on single mediators because required identification assumptions prohibit simple extensions to settings with multiple mediators that may depend on one another. In this article, we propose a procedure for obtaining fine-grained decompositions that may still be recovered from observed data in such complex settings. We first show that existing analytical approaches target specific instances of a more general set of decompositions and may therefore fail to provide a comprehensive assessment of the processes that underpin cause-effect relationships between exposure and outcome. We then outline conditions for obtaining the remaining set of decompositions. Because the number of targeted decompositions increases rapidly with the number of mediators, we introduce natural effects models along with estimation methods that allow for flexible and parsimonious modeling. Our procedure can easily be implemented using off-the-shelf software and is illustrated using a reanalysis of the World Health Organization's Large Analysis and Review of European Housing and Health Status (WHO-LARES) study on the effect of mold exposure on mental health (2002-2003). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Flexible modeling improves assessment of prognostic value of C-reactive protein in advanced non-small cell lung cancer

    PubMed Central

    Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D

    2010-01-01

    Background: C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). Methods: We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). Results: In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03–1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Conclusion: Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP. PMID:20234363
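
    A hedged sketch of the kind of assumption checking described above, assuming the Python lifelines package and synthetic data rather than the study's cohort: fit a Cox model and run a scaled Schoenfeld residual test of the proportional hazards assumption.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.statistics import proportional_hazard_test

      rng = np.random.default_rng(0)
      n = 300
      df = pd.DataFrame({
          "log2_crp": rng.normal(3.0, 1.0, n),   # hypothetical CRP on a doubling scale
          "age": rng.normal(65.0, 8.0, n),
      })
      # Synthetic survival times whose hazard depends on the covariates.
      hazard = np.exp(0.10 * df["log2_crp"] + 0.02 * (df["age"] - 65.0))
      df["time"] = rng.exponential(1.0 / hazard)
      df["event"] = 1   # no censoring in this toy example

      cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      # Scaled Schoenfeld residual test of the proportional hazards assumption.
      proportional_hazard_test(cph, df, time_transform="rank").print_summary()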

  14. Comparison of analytical models for zonal flow generation in ion-temperature-gradient mode turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J.; Miki, K.; Uzawa, K.

    2006-11-30

    During the past years, the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a variety of different analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results even though an assumption on the spectral difference is used in the WKE approach.

  15. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, with about 22 that are time-dependent and result in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of the uncertainties concerning the biosphere on very long timescales, stylised biosphere models are shown to provide a useful point of reference in themselves and remain a valuable tool for nuclear waste disposal licencing procedures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate

    NASA Astrophysics Data System (ADS)

    Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah

    2007-11-01

    This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.

  17. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
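
    A hedged sketch of the site-level likelihood behind the single-method, two-observation-state setting described above (a standard single-season occupancy model with false positives, maximized numerically on simulated detection histories; the parameter values and data are invented, and the design-optimization step of the paper is not reproduced):

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_likelihood(params, detections, n_surveys):
          # Single-season occupancy model with false positives: psi = occupancy probability,
          # p11 = detection at occupied sites, p10 = false detection at unoccupied sites.
          # p11 > p10 is assumed for identifiability; parameters are on the logit scale.
          psi, p11, p10 = 1.0 / (1.0 + np.exp(-np.asarray(params)))
          y, k = detections, n_surveys
          site_lik = (psi * p11 ** y * (1.0 - p11) ** (k - y)
                      + (1.0 - psi) * p10 ** y * (1.0 - p10) ** (k - y))
          return -np.sum(np.log(site_lik))

      # Hypothetical data: number of detections out of 5 surveys at 200 sites.
      rng = np.random.default_rng(0)
      occupied = rng.binomial(1, 0.4, 200)
      y = rng.binomial(5, np.where(occupied == 1, 0.5, 0.05))

      fit = minimize(neg_log_likelihood, x0=[0.0, 0.0, -2.0], args=(y, 5))
      print("psi, p11, p10 =", np.round(1.0 / (1.0 + np.exp(-fit.x)), 3))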

  18. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about missing data, i.e. the missing-data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data-mechanism assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.

  19. An eco-hydrologic model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-03-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission and their consideration alongside climatic datasets. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear eco-hydrologic model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.

  20. An ecohydrological model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-08-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission driven by climatic time series. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear ecohydrological model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.

  1. Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.

    PubMed

    Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis

    2016-07-01

    Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scope of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.

  2. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are compared in terms of the assumptions they make about the human operator's divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  3. Dust Plume Modeling at Fort Bliss: Move-Out Operations, Combat Training and Wind Erosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Elaine G.; Rishel, Jeremy P.; Rutz, Frederick C.

    2006-09-29

    The potential for air-quality impacts from heavy mechanized vehicles operating in the training ranges and on the unpaved main supply routes at Fort Bliss was investigated. This report details efforts by the staff of Pacific Northwest National Laboratory for the Fort Bliss Directorate of Environment in this investigation. Dust emission and dispersion from typical activities, including move outs and combat training, occurring on the installation were simulated using the atmospheric modeling system DUSTRAN. Major assumptions associated with designing specific modeling scenarios are summarized, and results from the simulations are presented.

  4. Stringent Mitigation Policy Implied By Temperature Impacts on Economic Growth

    NASA Astrophysics Data System (ADS)

    Moore, F.; Turner, D.

    2014-12-01

    Integrated assessment models (IAMs) compare the costs of greenhouse gas mitigation with damages from climate change in order to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained GDP growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth-rates in the Dynamic Integrated Climate and Economy (DICE) model via two pathways, total factor productivity (TFP) growth and capital depreciation. Even under optimistic adaptation assumptions, this damage specification implies that optimal climate policy involves the elimination of emissions in the near future, the stabilization of global temperature change below 2°C, and a social cost of carbon (SCC) an order of magnitude larger than previous estimates. A sensitivity analysis shows that the magnitude of growth effects, the rate of adaptation, and the dynamic interaction between damages from warming and GDP are three critical uncertainties and an important focus for future research.
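
    The distinction drawn above between damages to the level of output and damages to the growth rate of total factor productivity can be seen in a toy calculation (a hedged sketch with invented coefficients, not the calibrated DICE model): growth-rate damages compound, so the two specifications diverge over a century.

      # Toy comparison of level damages versus growth-rate damages over a century.
      # All coefficients are illustrative, not calibrated DICE values.
      warming = 0.03          # additional warming per year, degrees C (illustrative)
      g_tfp = 0.02            # baseline TFP growth per year
      level_damage = 0.005    # fractional output loss per degree C (level effect)
      growth_damage = 0.001   # reduction in TFP growth per degree C (growth effect)

      gdp_level, gdp_growth, temp = 1.0, 1.0, 0.0
      for _ in range(100):
          temp += warming
          gdp_level *= (1 + g_tfp)                          # damages hit the level only
          gdp_growth *= (1 + g_tfp - growth_damage * temp)  # damages hit the growth rate

      gdp_level *= (1 - level_damage * temp)
      print("GDP index, level damages :", round(gdp_level, 2))
      print("GDP index, growth damages:", round(gdp_growth, 2))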

  5. How the Emitted Size Distribution and Mixing State of Feldspar Affect Ice Nucleating Particles in a Global Model

    NASA Technical Reports Server (NTRS)

    Perlwitz, Jan P.; Fridlind, Ann M.; Knopf, Daniel A.; Miller, Ron L.; García-Pando, Carlos Perez

    2017-01-01

    The effect of aerosol particles on ice nucleation and, in turn, the formation of ice and mixed phase clouds is recognized as one of the largest sources of uncertainty in climate prediction. We apply an improved dust mineral specific aerosol module in the NASA GISS Earth System ModelE, which takes into account soil aggregates and their fragmentation at emission as well as the emission of large particles. We calculate ice nucleating particle concentrations from K-feldspar abundance for an active site parameterization for a range of activation temperatures and external and internal mixing assumptions. We find that the globally averaged INP concentration is reduced by a factor of two to three, compared to a simple assumption on the size distribution of emitted dust minerals. The decrease can amount to a factor of five in some geographical regions. The results vary little between external and internal mixing and different activation temperatures, except for the coldest temperatures. In the sectional size distribution, the size range 2-4 micrometers contributes the largest INP number.

  6. How the Emitted Size Distribution and Mixing State of Feldspar Affect Ice Nucleating Particles in a Global Model

    NASA Astrophysics Data System (ADS)

    Perlwitz, J. P.; Fridlind, A. M.; Knopf, D. A.; Miller, R. L.; Pérez García-Pando, C.

    2017-12-01

    The effect of aerosol particles on ice nucleation and, in turn, the formation of ice and mixed phase clouds is recognized as one of the largest sources of uncertainty in climate prediction. We apply an improved dust mineral specific aerosol module in the NASA GISS Earth System ModelE, which takes into account soil aggregates and their fragmentation at emission as well as the emission of large particles. We calculate ice nucleating particle concentrations from K-feldspar abundance for an active site parameterization for a range of activation temperatures and external and internal mixing assumptions. We find that the globally averaged INP concentration is reduced by a factor of two to three, compared to a simple assumption on the size distribution of emitted dust minerals. The decrease can amount to a factor of five in some geographical regions. The results vary little between external and internal mixing and different activation temperatures, except for the coldest temperatures. In the sectional size distribution, the size range 2-4 μm contributes the largest INP number.

  7. Role of mathematical models in assessment of risk and in attempts to define management strategy.

    PubMed

    Flamm, W G; Winbush, J S

    1984-06-01

    Risk assessment of food-borne carcinogens is becoming a common practice at FDA. Actual risk is not being estimated, only the upper limit of risk. The risk assessment process involves a large number of steps and assumptions, many of which affect the numerical value estimated. The mathematical model which is to be applied is only one of the factors which affect these numerical values. To fulfill the policy objective of using the "worst plausible case" in estimating the upper limit of risk, recognition needs to be given to a proper balancing of assumptions and decisions. Interaction between risk assessors and risk managers should avoid making or giving the appearance of making specific technical decisions such as the choice of the mathematical model. The importance of this emerging field is too great to jeopardize it by inappropriately mixing scientific judgments with policy judgments. The risk manager should understand fully the points and range of uncertainty involved in arriving at the estimates of risk which must necessarily affect the choice of the policy or regulatory options available.

  8. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.

  9. A longitudinal test of the demand-control model using specific job demands and specific job control.

    PubMed

    de Jonge, Jan; van Vegchel, Natasja; Shimazu, Akihito; Schaufeli, Wilmar; Dormann, Christian

    2010-06-01

    Supportive studies of the demand-control (DC) model were more likely to measure specific demands combined with a corresponding aspect of control. We conducted a longitudinal test of Karasek's (Adm Sci Q. 24:285-308, 1) job strain hypothesis, including specific measures of job demands and job control and both self-reported and objectively recorded well-being. The job strain hypothesis was tested among 267 health care employees from a two-wave Dutch panel survey with a 2-year time lag. Significant demand/control interactions were found for mental and emotional demands, but not for physical demands. The association between job demands and job satisfaction was positive under high job control, whereas this association was negative under low job control. In addition, the relation between job demands and psychosomatic health symptoms/sickness absence was negative under high job control and positive under low control. Longitudinal support was found for the core assumption of the DC model with specific measures of job demands and job control as well as self-reported and objectively recorded well-being.

  10. Choice Inconsistencies among the Elderly: Evidence from Plan Choice in the Medicare Part D Program: Reply

    PubMed Central

    ABALUCK, JASON

    2017-01-01

    We explore the in- and out-of-sample robustness of tests for choice inconsistencies based on parameter restrictions in parametric models, focusing on tests proposed by Ketcham, Kuminoff and Powers (KKP). We argue that their non-parametric alternatives are inherently conservative with respect to detecting mistakes. We then show that our parametric model is robust to KKP’s suggested specification checks, and that comprehensive goodness-of-fit measures perform better with our model than with the expected utility model. Finally, we explore the robustness of our 2011 results to alternative normative assumptions, highlighting the role of brand fixed effects and unobservable characteristics. PMID:29170561

  11. A Patient-Specific Foot Model for the Estimate of Ankle Joint Forces in Patients with Juvenile Idiopathic Arthritis.

    PubMed

    Prinold, Joe A I; Mazzà, Claudia; Di Marco, Roberto; Hannah, Iain; Malattia, Clara; Magni-Manzoni, Silvia; Petrarca, Maurizio; Ronchetti, Anna B; Tanturri de Horatio, Laura; van Dijkhuizen, E H Pieter; Wesarg, Stefan; Viceconti, Marco

    2016-01-01

    Juvenile idiopathic arthritis (JIA) is the leading cause of childhood disability from a musculoskeletal disorder. It generally affects large joints such as the knee and the ankle, often causing structural damage. Different factors contribute to the damage onset, including altered joint loading and other mechanical factors, associated with pain and inflammation. The prediction of patients' joint loading can hence be a valuable tool in understanding the disease mechanisms involved in structural damage progression. A number of lower-limb musculoskeletal models have been proposed to analyse the hip and knee joints, but juvenile models of the foot are still lacking. This paper presents a modelling pipeline that allows the creation of juvenile patient-specific models starting from lower limb kinematics and foot and ankle MRI data. This pipeline has been applied to data from three children with JIA and the importance of patient-specific parameters and modelling assumptions has been tested in a sensitivity analysis focused on the variation of the joint reaction forces. This analysis highlighted the criticality of patient-specific definition of the ankle joint axes and location of the Achilles tendon insertions. Patient-specific detection of the Tibialis Anterior, Tibialis Posterior, and Peroneus Longus origins and insertions were also shown to be important.

  12. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
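
    A hedged sketch of a unimodal two-piece normal detection function with a single apex, of the kind described above (the covariate machinery and double-observer likelihood are not reproduced, and the parameter values and distances are hypothetical):

      import numpy as np

      def two_piece_normal_detection(x, apex, sigma_left, sigma_right, g_max=1.0):
          # Unimodal detection probability with one apex: a half-normal rise before the
          # apex (scale sigma_left) and a half-normal fall after it (scale sigma_right).
          x = np.asarray(x, dtype=float)
          sigma = np.where(x < apex, sigma_left, sigma_right)
          return g_max * np.exp(-((x - apex) ** 2) / (2.0 * sigma ** 2))

      # Hypothetical aerial survey: detection peaks 150 m from the transect line.
      distances = np.array([0.0, 75.0, 150.0, 300.0, 600.0])
      print(np.round(two_piece_normal_detection(distances, 150.0, 80.0, 250.0), 3))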

  13. Temperature-dependent daily variability of precipitable water in special sensor microwave/imager observations

    NASA Technical Reports Server (NTRS)

    Gutowski, William J.; Lindemulder, Elizabeth A.; Jovaag, Kari

    1995-01-01

    We use retrievals of atmospheric precipitable water from satellite microwave observations and analyses of near-surface temperature to examine the relationship between these two fields on daily and longer time scales. The retrieval technique producing the data used here is most effective over the open ocean, so the analysis focuses on the southern hemisphere's extratropics, which have an extensive ocean surface. For both the total and the eddy precipitable water fields, there is a close correspondence between local variations in the precipitable water and near-surface temperature. The correspondence appears particularly strong for synoptic and planetary scale transient eddies. More specifically, the results support a typical modeling assumption that transient eddy moisture fields are proportional to transient eddy temperature fields under the assumption of constant relative humidity.
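
    The constant-relative-humidity assumption mentioned above implies that column water vapor scales roughly with saturation vapor pressure; the hedged sketch below uses the Bolton (1980) approximation to show the familiar 6-7% per kelvin scaling near typical surface temperatures (the reference temperature chosen is arbitrary).

      import numpy as np

      def saturation_vapor_pressure_hpa(t_celsius):
          # Bolton (1980) approximation to saturation vapor pressure over water, in hPa.
          return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

      # Fractional change in saturation vapor pressure (and, at constant relative
      # humidity, roughly in precipitable water) per 1 K of warming near 15 degrees C.
      t = 15.0
      scaling = saturation_vapor_pressure_hpa(t + 1.0) / saturation_vapor_pressure_hpa(t) - 1.0
      print(f"~{100 * scaling:.1f}% per K at {t} C")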

  14. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
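
    A hedged sketch of this idea using an off-the-shelf mixture fit (sklearn) rather than the authors' implementation: fit a two-component normal mixture to gene-level z-scores and read off each gene's posterior probability of belonging to the null (near-zero-mean) component; the z-scores here are simulated.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic z-scores: 90% null genes, 10% differentially expressed.
      z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(z)
      null_comp = np.argmin(np.abs(gmm.means_.ravel()))     # component closest to zero
      posterior_null = gmm.predict_proba(z)[:, null_comp]   # per-gene posterior probability of null

      print("genes flagged (posterior null < 0.1):", int((posterior_null < 0.1).sum()))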

  15. Osmotic Transport across Cell Membranes in Nondilute Solutions: A New Nondilute Solute Transport Equation

    PubMed Central

    Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.

    2009-01-01

    The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. When the new transport equations were fit using two permeability coefficients, the fits were as good as those obtained with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741

  16. Integrated Medical Model (IMM) 4.0 Enhanced Functionalities

    NASA Technical Reports Server (NTRS)

    Young, M.; Keenan, A. B.; Saile, L.; Boley, L. A.; Walton, M. E.; Shah, R. V.; Kerstman, E. L.; Myers, J. G.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic simulation model that uses input data on 100 medical conditions to simulate expected medical events, the resources required to treat them, and the resulting impact on the mission for specific crew and mission characteristics. The newest development version of IMM, IMM v4.0, adds capabilities that remove some of the conservative assumptions that underlie the current operational version, IMM v3. While IMM v3 provides the framework to simulate whether a medical event occurred, IMM v4 also simulates when the event occurred during a mission timeline. This allows for more accurate estimation of mission time lost and resource utilization. In addition to the mission timeline, IMM v4.0 features two enhancements that address IMM v3 assumptions regarding medical event treatment. Medical events in IMM v3 are assigned the untreated outcome if any resource required to treat the event is unavailable. IMM v4 allows for partially treated outcomes that are proportional to the amount of required resources available, thus removing the dichotomous treatment assumption. An additional capability of IMM v4 is the use of an alternative medical resource when the primary resource assigned to the condition is depleted, more accurately reflecting the real-world system. The additional capabilities defining IMM v4.0 (the mission timeline, partial treatment, and alternate drug use) result in more realistic predicted mission outcomes. The primary model outcomes of IMM v4.0 for the ISS6 mission, including mission time lost, probability of evacuation, and probability of loss of crew life, are compared to those produced by the current operational version of IMM to showcase the enhanced prediction capabilities.

  17. Heuristics and Biases in Military Decision Making

    DTIC Science & Technology

    2010-10-01

    rationality and is based on a linear, step-based model that generates a specific course of action and is useful for the examination of problems that ... exhibit stability and are underpinned by assumptions of “technical-rationality.” The Army values MDMP as the sanctioned approach for solving ... theory), which sought to describe human behavior as a rational maximization of cost-benefit decisions, Kahneman and Tversky provided a simple

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valin, Hugo; Sands, Ronald; van der Mensbrugghe, Dominique

    Understanding the capacity of agricultural systems to feed the world population under climate change requires a good forward-looking view of the future development of food demand. This paper reviews modeling approaches from ten global economic models participating in the AgMIP project, in particular the demand functions chosen and the sets of parameters used. We compare food demand projections to 2050 for various regions and agricultural products under harmonized scenarios. Depending on the model, we find for a business-as-usual scenario (SSP2) an increase in food demand of 59-98% by 2050, slightly higher than the FAO projection (54%). The outlook for animal calories is particularly uncertain, with a range of 61-144%, whereas FAO anticipates an increase of 76%. The projections prove more sensitive to socio-economic assumptions than to climate change conditions or bioenergy development. When considering a higher-population, lower-economic-growth world (SSP3), consumption per capita drops by 9% for crops and 18% for livestock. The various assumptions on climate change in this exercise do not lead to world calorie losses greater than 6%. Divergences across models are nevertheless notable, due to differences in demand systems, income elasticity specifications, and responses to price change in the baseline.

  19. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used for modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models with the aim of illuminating their hidden assumptions and the impact these may have on model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the various assumptions about the disease process, and the choice of time advance.
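
    As a toy illustration of the time-advance point, the sketch below runs the same SIR compartmental model with a fine (near-continuous) step and with a coarse one-day discrete step and reports the epidemic peak; the parameter values are arbitrary and are not taken from this study.

        def sir_peak(beta, gamma, s0, i0, dt, days):
            """Discrete-time update of the standard SIR equations with step dt;
            returns the peak infected fraction."""
            s, i, r = s0, i0, 0.0
            peak = i
            for _ in range(int(days / dt)):
                new_inf = beta * s * i * dt
                new_rec = gamma * i * dt
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
                peak = max(peak, i)
            return peak

        # Same parameters, two choices of time advance (illustrative values only).
        for dt in (0.01, 1.0):
            print(dt, sir_peak(beta=0.4, gamma=0.2, s0=0.999, i0=0.001, dt=dt, days=200))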

  20. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used for modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models with the aim of illuminating their hidden assumptions and the impact these may have on model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the various assumptions about the disease process, and the choice of time advance.

  1. Evaluation of a Method for Estimating Retinal Ganglion Cell Counts Using Visual Fields and Optical Coherence Tomography

    PubMed Central

    Raza, Ali S.; Hood, Donald C.

    2015-01-01

    Purpose. To evaluate the accuracy and generalizability of a published model that derives estimates of retinal ganglion cell (RGC) counts and relates structural and functional changes due to glaucoma. Methods. Both the Harwerth et al. nonlinear model (H-NLM) and the Hood and Kardon linear model (HK-LM) were applied to an independent dataset of frequency-domain optical coherence tomography and visual fields, consisting of 48 eyes of 48 healthy controls, 100 eyes of 77 glaucoma patients and suspects, and 18 eyes of 14 nonarteritic anterior ischemic optic neuropathy (ION) patients with severe vision loss. Using the coefficient of determination R2, the models were compared while keeping constant the topographic maps, specifically a map by Garway-Heath et al. and a separate map by Harwerth et al., which relate sensitivity test stimulus locations to corresponding regions around the optic disc. Additionally, simulations were used to evaluate the assumptions of the H-NLM. Results. Although the predictions of the HK-LM with the anatomically-derived Garway-Heath et al. map were reasonably good (R2 = 0.31–0.64), the predictions of the H-NLM were poor (R2 < 0) regardless of the map used. Furthermore, simulations of the H-NLM yielded results that differed substantially from RGC estimates based on histology from human subjects. Finally, the value added by factors that increase the relative complexity of the H-NLM, such as assumptions regarding age- and stage-dependent corrections to structural measures, was unclear. Conclusions. Several of the assumptions underlying the H-NLM should be revisited. Studies and models relying on the RGC estimates of the H-NLM should be interpreted with caution. PMID:25604684
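
    The negative R2 values reported for the H-NLM are possible because, outside ordinary least-squares fitting, the coefficient of determination compares a model's squared prediction error with the spread of the observations about their mean, and it drops below zero whenever the model predicts worse than that mean. A minimal computation with made-up numbers:

        import numpy as np

        def r_squared(observed, predicted):
            """R^2 = 1 - SS_res/SS_tot; negative when predictions are worse than
            simply using the mean of the observations."""
            observed, predicted = np.asarray(observed), np.asarray(predicted)
            ss_res = np.sum((observed - predicted) ** 2)
            ss_tot = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        obs = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
        print(r_squared(obs, obs * 0.9))          # reasonable predictions -> R^2 about 0.8
        print(r_squared(obs, np.full(5, 40.0)))   # poor predictions       -> R^2 far below 0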

  2. Cost-effective management alternatives for Snake River Chinook salmon: a biological-economic synthesis.

    PubMed

    Halsing, David L; Moore, Michael R

    2008-04-01

    The mandate to increase endangered salmon populations in the Columbia River Basin of North America has created a complex, controversial resource-management issue. We constructed an integrated assessment model as a tool for analyzing biological-economic trade-offs in recovery of Snake River spring- and summer-run chinook salmon (Oncorhynchus tshawytscha). We merged 3 frameworks: a salmon-passage model to predict migration and survival of smolts; an age-structured matrix model to predict long-term population growth rates of salmon stocks; and a cost-effectiveness analysis to determine a set of least-cost management alternatives for achieving particular population growth rates. We assessed 6 individual salmon-management measures and 76 management alternatives composed of one or more measures. To reflect uncertainty, results were derived for different assumptions of effectiveness of smolt transport around dams. Removal of an estuarine predator, the Caspian Tern (Sterna caspia), was cost-effective and generally increased long-term population growth rates regardless of transport effectiveness. Elimination of adult salmon harvest had a similar effect over a range of its cost estimates. The specific management alternatives in the cost-effective set depended on assumptions about transport effectiveness. On the basis of recent estimates of smolt transport effectiveness, alternatives that discontinued transportation or breached dams were prevalent in the cost-effective set, whereas alternatives that maximized transportation dominated if transport effectiveness was relatively high. More generally, the analysis eliminated 80-90% of management alternatives from the cost-effective set. Application of our results to salmon management is limited by data availability and model assumptions, but these limitations can help guide research that addresses critical uncertainties and information. Our results thus demonstrate that linking biology and economics through integrated models can provide valuable tools for science-based policy and management.
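
    The cost-effectiveness step can be pictured as a simple screening rule: discard any alternative for which some other alternative is at least as effective and no more expensive, and what remains is the least-cost set across target growth rates. The sketch below uses fabricated costs and growth rates purely to show the screening logic, not figures from the study.

        def cost_effective_set(alternatives):
            """Keep alternatives not dominated by a cheaper (or equal-cost), equally
            or more effective option; input is a list of (name, cost, growth_rate)."""
            kept = []
            for name, cost, growth in alternatives:
                dominated = any(c <= cost and g >= growth and (c, g) != (cost, growth)
                                for _, c, g in alternatives)
                if not dominated:
                    kept.append((name, cost, growth))
            return sorted(kept, key=lambda alt: alt[1])

        # Hypothetical alternatives: (name, annual cost in $M, population growth rate).
        alternatives = [("status quo", 0, 0.95), ("predator removal", 5, 0.99),
                        ("harvest elimination", 12, 0.99), ("dam breaching", 300, 1.02)]
        print(cost_effective_set(alternatives))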

  3. Cost-effective management alternatives for Snake river chinook salmon: A biological-economic synthesis

    USGS Publications Warehouse

    Halsing, D.L.; Moore, M.R.

    2008-01-01

    The mandate to increase endangered salmon populations in the Columbia River Basin of North America has created a complex, controversial resource-management issue. We constructed an integrated assessment model as a tool for analyzing biological-economic trade-offs in recovery of Snake River spring- and summer-run chinook salmon (Oncorhynchus tshawytscha). We merged 3 frameworks: a salmon-passage model to predict migration and survival of smolts; an age-structured matrix model to predict long-term population growth rates of salmon stocks; and a cost-effectiveness analysis to determine a set of least-cost management alternatives for achieving particular population growth rates. We assessed 6 individual salmon-management measures and 76 management alternatives composed of one or more measures. To reflect uncertainty, results were derived for different assumptions of effectiveness of smolt transport around dams. Removal of an estuarine predator, the Caspian Tern (Sterna caspia), was cost-effective and generally increased long-term population growth rates regardless of transport effectiveness. Elimination of adult salmon harvest had a similar effect over a range of its cost estimates. The specific management alternatives in the cost-effective set depended on assumptions about transport effectiveness. On the basis of recent estimates of smolt transport effectiveness, alternatives that discontinued transportation or breached dams were prevalent in the cost-effective set, whereas alternatives that maximized transportation dominated if transport effectiveness was relatively high. More generally, the analysis eliminated 80-90% of management alternatives from the cost-effective set. Application of our results to salmon management is limited by data availability and model assumptions, but these limitations can help guide research that addresses critical uncertainties and information. Our results thus demonstrate that linking biology and economics through integrated models can provide valuable tools for science-based policy and management.

  4. Analysis of free turbulent shear flows by numerical methods

    NASA Technical Reports Server (NTRS)

    Korst, H. H.; Chow, W. L.; Hurt, R. F.; White, R. A.; Addy, A. L.

    1973-01-01

    Studies are described in which the effort was essentially directed to classes of problems where the phenomenologically interpreted effective transport coefficients could be absorbed by, and subsequently extracted from (by comparison with experimental data), appropriate coordinate transformations. The transformed system of differential equations could then be solved without further specifications or assumptions by numerical integration procedures. An attempt was made to delineate different regimes for which specific eddy viscosity models could be formulated. In particular, this would account for the carryover of turbulence from attached boundary layers, the transitory adjustment, and the asymptotic behavior of initially disturbed mixing regions. Such models were subsequently used in seeking solutions for the prescribed two-dimensional test cases, yielding a better insight into overall aspects of the exchange mechanisms.

  5. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
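
    A generic version of this kind of validation plot (not the authors' teaching tool itself) takes only a few lines: fit a linear model to simulated data, then inspect the residuals against the fitted values and against normal quantiles.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, 100)
        y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)   # simulated data

        slope, intercept = np.polyfit(x, y, 1)
        fitted = intercept + slope * x
        residuals = y - fitted

        fig, axes = plt.subplots(1, 2, figsize=(8, 3))
        axes[0].scatter(fitted, residuals)                    # look for trends or funnels
        axes[0].axhline(0.0, color="grey")
        axes[0].set(xlabel="fitted values", ylabel="residuals")
        stats.probplot(residuals, dist="norm", plot=axes[1])  # normality check (Q-Q plot)
        plt.tight_layout()
        plt.show()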

  6. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    PubMed Central

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613

  7. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

    Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
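
    The matrix-selection method mentioned here amounts to drawing one of the observed annual matrices at random at each time step and tracking the long-run growth of the population vector. The sketch below uses two invented 3x3 stage matrices purely to show how the stochastic growth rate is computed; it is not the species data from the study.

        import numpy as np

        def stochastic_growth_rate(matrices, n_steps=5000, seed=0):
            """Estimate the stochastic growth rate lambda_s by selecting a whole
            observed matrix at random at each time step (matrix-selection method)."""
            rng = np.random.default_rng(seed)
            n = np.ones(matrices[0].shape[0])
            log_growth = 0.0
            for _ in range(n_steps):
                A = matrices[rng.integers(len(matrices))]
                n = A @ n
                total = n.sum()
                log_growth += np.log(total)
                n /= total                          # renormalize to avoid overflow
            return np.exp(log_growth / n_steps)

        # Two hypothetical annual transition matrices for a three-stage plant.
        A_good = np.array([[0.10, 0.0, 4.0], [0.30, 0.40, 0.0], [0.0, 0.40, 0.80]])
        A_bad  = np.array([[0.05, 0.0, 1.5], [0.10, 0.30, 0.0], [0.0, 0.20, 0.70]])
        print(stochastic_growth_rate([A_good, A_bad]))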

  8. Can we predict ectotherm responses to climate change using thermal performance curves and body temperatures?

    PubMed

    Sinclair, Brent J; Marshall, Katie E; Sewell, Mary A; Levesque, Danielle L; Willett, Christopher S; Slotsbo, Stine; Dong, Yunwei; Harley, Christopher D G; Marshall, David J; Helmuth, Brian S; Huey, Raymond B

    2016-11-01

    Thermal performance curves (TPCs), which quantify how an ectotherm's body temperature (Tb) affects its performance or fitness, are often used in an attempt to predict organismal responses to climate change. Here, we examine the key - but often biologically unreasonable - assumptions underlying this approach; for example, that physiology and thermal regimes are invariant over ontogeny, space and time, and also that TPCs are independent of previously experienced Tb. We show how a critical consideration of these assumptions can lead to biologically useful hypotheses and experimental designs. For example, rather than assuming that TPCs are fixed during ontogeny, one can measure TPCs for each major life stage and incorporate these into stage-specific ecological models to reveal the life stage most likely to be vulnerable to climate change. Our overall goal is to explicitly examine the assumptions underlying the integration of TPCs with Tb, to develop a framework within which empiricists can place their work within these limitations, and to facilitate the application of thermal physiology to understanding the biological implications of climate change. © 2016 John Wiley & Sons Ltd/CNRS.

  9. Neural models on temperature regulation for cold-stressed animals

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.

    1975-01-01

    The present review evaluates several assumptions common to a variety of current models for thermoregulation in cold-stressed animals. Three areas covered by the models are discussed: signals to and from the central nervous system (CNS), portions of the CNS involved, and the arrangement of neurons within networks. Assumptions in each of these categories are considered. The evaluation of the models is based on the experimental foundations of the assumptions. Regions of the nervous system concerned here include the hypothalamus, the skin, the spinal cord, the hippocampus, and the septal area of the brain.

  10. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
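
    Stripped of the semi-nonparametric machinery, the final step of such a test is a standard likelihood ratio comparison between the restricted (Gumbel/MNL) model and the more flexible model that nests it. The sketch below shows only that generic step with placeholder log-likelihood values; it is not the authors' estimation code.

        from scipy.stats import chi2

        def likelihood_ratio_test(loglik_restricted, loglik_flexible, extra_params):
            """LR statistic 2*(LL_flexible - LL_restricted), referred to a chi-square
            distribution with degrees of freedom equal to the added parameters."""
            lr_stat = 2.0 * (loglik_flexible - loglik_restricted)
            p_value = chi2.sf(lr_stat, df=extra_params)
            return lr_stat, p_value

        # Placeholder log-likelihoods for illustration only.
        print(likelihood_ratio_test(loglik_restricted=-1540.2,
                                    loglik_flexible=-1532.7, extra_params=4))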

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the previous assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.

  13. Psychiatry's next top model: cause for a re-think on drug models of psychosis and other psychiatric disorders.

    PubMed

    Carhart-Harris, R L; Brugger, S; Nutt, D J; Stone, J M

    2013-09-01

    Despite the widespread application of drug modelling in psychiatric research, the relative value of different models has never been formally compared in the same analysis. Here we compared the effects of five drugs (cannabis, psilocybin, amphetamine, ketamine and alcohol) in relation to psychiatric symptoms in a two-part subjective analysis. In the first part, mental health professionals associated statements referring to specific experiences, for example 'I don't bother to get out of bed', to one or more psychiatric symptom clusters, for example depression and negative psychotic symptoms. This measured the specificity of an experience for a particular disorder. In the second part, individuals with personal experience with each of the above-listed drugs were asked how reliably each drug produced the experiences listed in part 1, both acutely and sub-acutely. Part 1 failed to find any experiences that were specific for negative or cognitive psychotic symptoms over depression. The best model of positive symptoms was psilocybin and the best models overall were the acute alcohol and amphetamine models of mania. These results challenge current assumptions about drug models and motivate further research on this understudied area.

  14. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

  15. Problems in the Definition, Interpretation, and Evaluation of Genetic Heterogeneity

    PubMed Central

    Whittemore, Alice S.; Halpern, Jerry

    2001-01-01

    Suppose that we wish to classify families with multiple cases of disease into one of three categories: those that segregate mutations of a gene of interest, those which segregate mutations of other genes, and those whose disease is due to nonhereditary factors or chance. Among families in the first two categories (the hereditary families), we wish to estimate the proportion, p, of families that segregate mutations of the gene of interest. Although this proportion is a commonly accepted concept, it is well defined only with an unambiguous definition of “family.” Even then, extraneous factors such as family sizes and structures can cause p to vary across different populations and, within a population, to be estimated differently by different studies. Restrictive assumptions about the disease are needed, in order to avoid this undesirable variation. The assumptions require that mutations of all disease-causing genes (i) have no effect on family size, (ii) have very low frequencies, and (iii) have penetrances that satisfy certain constraints. Despite the unverifiability of these assumptions, linkage studies often invoke them to estimate p, using the admixture likelihood introduced by Smith and discussed by Ott. We argue against this common practice, because (1) it also requires the stronger assumption of equal penetrances for all etiologically relevant genes; (2) even if all assumptions are met, estimates of p are sensitive to misspecification of the unknown phenocopy rate; (3) even if all the necessary assumptions are met and the phenocopy rate is correctly specified, estimates of p that are obtained by linkage programs such as HOMOG and GENEHUNTER are based on the wrong likelihood and therefore are biased in the presence of phenocopies. We show how to correct these estimates; but, nevertheless, we do not recommend the use of parametric heterogeneity models in linkage analysis, even merely as a tool for increasing the statistical power to detect linkage. This is because the assumptions required by these models cannot be verified, and their violation could actually decrease power. Instead, we suggest that estimation of p be postponed until the relevant genes have been identified. Then their frequencies and penetrances can be estimated on the basis of population-based samples and can be used to obtain more-robust estimates of p for specific populations. PMID:11170893
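
    The admixture likelihood referred to here mixes, for each family, a "linked" and an "unlinked" likelihood with weight p, and p is estimated by maximizing the product over families. Given per-family likelihoods under the two hypotheses, that maximization can be done on a grid, as in the hedged sketch below; the per-family numbers are invented and the sketch is not the HOMOG or GENEHUNTER implementation.

        import numpy as np

        def estimate_admixture_p(lik_linked, lik_unlinked, grid_size=1001):
            """Maximize prod_f [ p*L_linked(f) + (1-p)*L_unlinked(f) ] over p in [0, 1]."""
            lik_linked = np.asarray(lik_linked, dtype=float)
            lik_unlinked = np.asarray(lik_unlinked, dtype=float)
            p_grid = np.linspace(0.0, 1.0, grid_size)
            loglik = [np.sum(np.log(p * lik_linked + (1.0 - p) * lik_unlinked))
                      for p in p_grid]
            return p_grid[int(np.argmax(loglik))]

        # Invented per-family likelihood ratios relative to the unlinked model.
        linked = np.array([3.0, 0.8, 5.2, 1.1, 0.4, 2.7])
        unlinked = np.ones_like(linked)
        print(estimate_admixture_p(linked, unlinked))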

  16. Informed Source Separation: A Bayesian Tutorial

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2005-01-01

    Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.

  17. Random Dynamics

    NASA Astrophysics Data System (ADS)

    Bennett, D. L.; Brene, N.; Nielsen, H. B.

    1987-01-01

    The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.

  18. Distinguishing Between Convergent Evolution and Violation of the Molecular Clock for Three Taxa.

    PubMed

    Mitchell, Jonathan D; Sumner, Jeremy G; Holland, Barbara R

    2018-05-18

    We give a non-technical introduction to convergence-divergence models, a new modeling approach for phylogenetic data that allows for the usual divergence of lineages after lineage-splitting but also allows for taxa to converge, i.e. become more similar over time. By examining the 3-taxon case in some detail we illustrate that phylogeneticists have been "spoiled" in the sense of not having to think about the structural parameters in their models by virtue of the strong assumption that evolution is tree-like. We show that there are not always good statistical reasons to prefer the usual class of tree-like models over more general convergence-divergence models. Specifically we show many 3-taxon data sets can be equally well explained by supposing violation of the molecular clock due to change in the rate of evolution along different edges, or by keeping the assumption of a constant rate of evolution but instead assuming that evolution is not a purely divergent process. Given the abundance of evidence that evolution is not strictly tree-like, our discussion is an illustration that as phylogeneticists we need to think clearly about the structural form of the models we use. For cases with four taxa we show that there will be far greater ability to distinguish models with convergence from non-clock-like tree models.

  19. Proposed best practice for projects that involve modelling and simulation.

    PubMed

    O'Kelly, Michael; Anisimov, Vladimir; Campbell, Chris; Hamilton, Sinéad

    2017-03-01

    Modelling and simulation has been used in many ways when developing new treatments. To be useful and credible, it is generally agreed that modelling and simulation should be undertaken according to some kind of best practice. A number of authors have suggested elements required for best practice in modelling and simulation. Elements that have been suggested include the pre-specification of goals, assumptions, methods, and outputs. However, a project that involves modelling and simulation could be simple or complex and could be of relatively low or high importance to the project. It has been argued that the level of detail and the strictness of pre-specification should be allowed to vary, depending on the complexity and importance of the project. This best practice document does not prescribe how to develop a statistical model. Rather, it describes the elements required for the specification of a project and requires that the practitioner justify in the specification the omission of any of the elements and, in addition, justify the level of detail provided about each element. This document is an initiative of the Special Interest Group for modelling and simulation. The Special Interest Group for modelling and simulation is a body open to members of Statisticians in the Pharmaceutical Industry and the European Federation of Statisticians in the Pharmaceutical Industry. Examples of a very detailed specification and a less detailed specification are included as appendices. Copyright © 2016 John Wiley & Sons, Ltd.

  20. No Control Genes Required: Bayesian Analysis of qRT-PCR Data

    PubMed Central

    Matz, Mikhail V.; Wright, Rachel M.; Scott, James G.

    2013-01-01

    Background Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. Results In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the “classic” analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Conclusions Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R. PMID:23977043
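
    The error structure named here (Poisson counts whose underlying rate varies lognormally) is easy to simulate, and doing so shows both the extra variance at low abundance and the zero counts the model is built to exploit. This is only a generative sketch of the distributional assumption, not the MCMC.qpcr fitting machinery.

        import numpy as np

        rng = np.random.default_rng(5)

        def poisson_lognormal_counts(log_mean, sd_log, size):
            """Counts whose log-rate is normally distributed (Poisson-lognormal error)."""
            rates = np.exp(rng.normal(loc=log_mean, scale=sd_log, size=size))
            return rng.poisson(rates)

        counts = poisson_lognormal_counts(log_mean=0.3, sd_log=0.8, size=10000)
        print(counts.mean(), counts.var(), (counts == 0).mean())  # variance > mean; zeros occur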

  1. Maternal serum alpha-fetoprotein (MSAFP) patient-specific risk reporting: its use and misuse.

    PubMed

    Macri, J N; Kasturi, R V; Krantz, D A; Cook, E J; Larsen, J W

    1990-03-01

    Fundamental to maternal serum alpha-fetoprotein screening is the clinical utility of the laboratory report. It follows that the scientific form of expression in that report is vital. Professional societies concur that patient-specific risk reporting is the preferred form. However, some intermediate steps being taken to calculate patient-specific risks are invalid because of the erroneous assumption that multiples of the median (MoMs) represent an interlaboratory common currency. The numerous methods by which MoMs may be calculated belie the foregoing assumption.
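
    A multiple of the median is simply the measured AFP value divided by the reporting laboratory's own median for the same gestational age, which is exactly why MoMs are not an interlaboratory common currency: the result inherits every choice made in constructing that median curve. A minimal computation with invented numbers:

        def multiple_of_median(measured_afp, lab_median_for_gestational_age):
            """Multiple of the median: assay value divided by this lab's own median."""
            return measured_afp / lab_median_for_gestational_age

        # The same specimen scored against two labs' median curves (invented values):
        print(multiple_of_median(62.0, 31.0))  # lab A median 31 IU/mL at 17 weeks -> 2.0 MoM
        print(multiple_of_median(62.0, 36.0))  # lab B median 36 IU/mL at 17 weeks -> about 1.7 MoM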

  2. Disambiguating brain functional connectivity.

    PubMed

    Duff, Eugene P; Makin, Tamar; Cottaar, Michiel; Smith, Stephen M; Woolrich, Mark W

    2018-06-01

    Functional connectivity (FC) analyses of correlations of neural activity are used extensively in neuroimaging and electrophysiology to gain insights into neural interactions. However, analyses assessing changes in correlation fail to distinguish effects produced by sources as different as changes in neural signal amplitudes or noise levels. This ambiguity substantially diminishes the value of FC for inferring system properties and clinical states. Network modelling approaches may avoid ambiguities, but require specific assumptions. We present an enhancement to FC analysis with improved specificity of inferences, minimal assumptions and no reduction in flexibility. The Additive Signal Change (ASC) approach characterizes FC changes into certain prevalent classes of signal change that involve the input of additional signal to existing activity. With FMRI data, the approach reveals a rich diversity of signal changes underlying measured changes in FC, suggesting that it could clarify our current understanding of FC changes in many contexts. The ASC method can also be used to disambiguate other measures of dependency, such as regression and coherence, providing a flexible tool for the analysis of neural data. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
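
    The ambiguity targeted here is easy to reproduce: the measured correlation between two regions can change even when their coupling is untouched, simply because the noise level (or signal amplitude) changes. The toy simulation below makes that point; it is not the ASC method itself.

        import numpy as np

        rng = np.random.default_rng(2)
        shared = rng.normal(size=10000)            # common signal driving both regions

        def correlation_with_noise(noise_sd):
            a = shared + rng.normal(scale=noise_sd, size=shared.size)
            b = shared + rng.normal(scale=noise_sd, size=shared.size)
            return np.corrcoef(a, b)[0, 1]

        # Identical coupling (the same shared signal), different noise levels:
        print(correlation_with_noise(0.5))   # higher correlation
        print(correlation_with_noise(2.0))   # lower correlation, with no change in coupling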

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alani, Ivo; Santillán, Osvaldo P., E-mail: firenzecita@hotmail.com, E-mail: osantil@dm.uba.ar

    In the present work some generalizations of the Hawking singularity theorems in the context of f(R) theories are presented. The main assumptions are: the matter fields' stress-energy tensor satisfies the condition (T_{ij} − (g_{ij}/2)T) k^i k^j ≥ 0 for any generic unit timelike field k^i; the scalaron takes bounded positive values during its evolution; and the resulting spacetime is globally hyperbolic. Then, if there exists a Cauchy hypersurface Σ for which the expansion parameter θ of the geodesic congruence emanating orthogonally from Σ satisfies some specific bounds, the resulting spacetime is geodesically incomplete. Some mathematical results of reference [92] are very important for proving this. The generalized theorems presented here apply directly to some specific models such as the Hu-Sawicki or Starobinsky ones [27,38]. For other scenarios, some extra assumptions should be added in order to obtain a geodesically incomplete spacetime. The hypotheses considered in this text are sufficient, but not necessary. In other words, their negation does not imply that a singularity is absent.

  4. Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane

    This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.

  5. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
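
    The core mechanism, a noisy accumulator that rises linearly on average toward a fixed threshold, is straightforward to simulate. In the sketch below the drift is set so the threshold is reached at the target duration and the noise is scaled with the drift, an assumption of this sketch that reproduces the scale-invariant (constant coefficient of variation) behavior described; all parameter values are arbitrary, and this is an idealized illustration rather than the authors' full model.

        import numpy as np

        rng = np.random.default_rng(6)

        def timed_responses(target_s, threshold=1.0, noise=0.15, dt=0.005, n=500):
            """First-passage times of a noisy accumulator whose mean drift is chosen
            to hit the threshold at the target duration."""
            drift = threshold / target_s
            times = np.empty(n)
            for k in range(n):
                x, t = 0.0, 0.0
                while x < threshold:
                    x += drift * dt + noise * np.sqrt(drift * dt) * rng.normal()
                    t += dt
                times[k] = t
            return times

        for target in (2.0, 8.0):
            t = timed_responses(target)
            print(target, round(t.mean(), 2), round(t.std() / t.mean(), 3))  # CV roughly constant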

  6. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
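
    A hedged sketch of the weighted circular (range-based) step: given distance estimates to known anchor nodes, for example derived from received signal strength through a path-loss model, the position is found by weighted non-linear least squares. The anchor positions, ranges, and weights below are invented, and this Gauss-Newton solver is a generic illustration rather than the authors' exact algorithm.

        import numpy as np

        def weighted_multilateration(anchors, ranges, weights, n_iter=20):
            """Weighted least squares on range residuals ||x - anchor_i|| - range_i,
            solved by Gauss-Newton iteration starting from the anchor centroid."""
            anchors = np.asarray(anchors, dtype=float)
            ranges = np.asarray(ranges, dtype=float)
            W = np.diag(weights)
            x = anchors.mean(axis=0)                 # initial guess: centroid
            for _ in range(n_iter):
                diffs = x - anchors
                dists = np.linalg.norm(diffs, axis=1)
                residuals = dists - ranges
                J = diffs / dists[:, None]           # Jacobian of the distances
                step = np.linalg.solve(J.T @ W @ J, J.T @ W @ residuals)
                x = x - step
            return x

        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
        ranges = [7.2, 7.1, 7.0, 7.3]                # noisy range estimates (m)
        weights = [1.0, 0.8, 1.0, 0.5]               # larger weight = more trusted measurement
        print(weighted_multilateration(anchors, ranges, weights))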

  7. Constraining the physics of jet quenching

    NASA Astrophysics Data System (ADS)

    Renk, Thorsten

    2012-04-01

    Hard probes in the context of ultrarelativistic heavy-ion collisions represent a key class of observables studied to gain information about the QCD medium created in such collisions. However, in practice, the so-called jet tomography has turned out to be more difficult than expected initially. One of the major obstacles in extracting reliable tomographic information from the data is that neither the parton-medium interaction nor the medium geometry is known with great precision, and thus a difference in model assumptions in the hard perturbative Quantum Chromodynamics (pQCD) modeling can usually be compensated by a corresponding change of assumptions in the soft bulk medium sector and vice versa. The only way to overcome this problem is to study the full systematics of combinations of parton-medium interaction and bulk medium evolution models. This work presents a meta-analysis summarizing results from a number of such systematical studies and discusses in detail how certain data sets provide specific constraints for models. Combining all available information, only a small group of models exhibiting certain characteristic features consistent with a pQCD picture of parton-medium interaction is found to be viable given the data. In this picture, the dominant mechanism is medium-induced radiation combined with a surprisingly small component of elastic energy transfer into the medium.

  8. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

    PubMed

    Martina, R; Kay, R; van Maanen, R; Ridder, A

    2015-01-01

    Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
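
    In practice this modelling framework is a few lines in standard software. The hedged sketch below fits Poisson and negative binomial regressions to simulated overdispersed counts and reports the treatment effect as a rate ratio; the variable names and data are invented, and the negative binomial dispersion is fixed rather than estimated to keep the example short.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 400
        treatment = rng.integers(0, 2, size=n)
        # Overdispersed counts: Poisson rates with gamma-distributed heterogeneity.
        rates = np.exp(1.0 - 0.4 * treatment) * rng.gamma(shape=2.0, scale=0.5, size=n)
        episodes = rng.poisson(rates)

        X = sm.add_constant(treatment.astype(float))
        poisson_fit = sm.GLM(episodes, X, family=sm.families.Poisson()).fit()
        negbin_fit = sm.GLM(episodes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

        # exp(coefficient) is the rate ratio for treated vs. control.
        print(np.exp(poisson_fit.params[1]), np.exp(negbin_fit.params[1]))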

  9. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092

  10. The importance of being equivalent: Newton's two models of one-body motion

    NASA Astrophysics Data System (ADS)

    Pourciau, Bruce

    2004-05-01

    As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.

  11. Mathematization Competencies of Pre-Service Elementary Mathematics Teachers in the Mathematical Modelling Process

    ERIC Educational Resources Information Center

    Yilmaz, Suha; Tekin-Dede, Ayse

    2016-01-01

    Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…

  12. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    ERIC Educational Resources Information Center

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  13. A complete graphical criterion for the adjustment formula in mediation analysis.

    PubMed

    Shpitser, Ilya; VanderWeele, Tyler J

    2011-03-04

    Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.

  14. GCAM 3.0 Agriculture and Land Use: Data Sources and Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyle, G. Page; Luckow, Patrick; Calvin, Katherine V.

    This report presents the data processing methods used in the GCAM 3.0 agriculture and land use component, starting from all source data used, and detailing all calculations and assumptions made in generating the model inputs. The report starts with a brief introduction to modeling of agriculture and land use in GCAM 3.0, and then provides documentation of the data and methods used for generating the base-year dataset and future scenario parameters assumed in the model input files. Specifically, the report addresses primary commodity production, secondary (animal) commodity production, disposition of commodities, land allocation, land carbon contents, and land values.

  15. A high fidelity real-time simulation of a small turboshaft engine

    NASA Technical Reports Server (NTRS)

    Ballin, Mark G.

    1988-01-01

    A high-fidelity component-type model and real-time digital simulation of the General Electric T700-GE-700 turboshaft engine were developed for use with current generation real-time blade-element rotor helicopter simulations. A control system model based on the specification fuel control system used in the UH-60A Black Hawk helicopter is also presented. The modeling assumptions and real-time digital implementation methods particular to the simulation of small turboshaft engines are described. The validity of the simulation is demonstrated by comparison with analysis-oriented simulations developed by the manufacturer, available test data, and flight-test time histories.

  16. Power, Revisited

    ERIC Educational Resources Information Center

    Roscigno, Vincent J.

    2011-01-01

    Power is a core theoretical construct in the field with amazing utility across substantive areas, levels of analysis and methodologies. Yet, its use along with associated assumptions--assumptions surrounding constraint vs. action and specifically organizational structure and rationality--remain problematic. In this article, and following an…

  17. A radiosity-based model to compute the radiation transfer of soil surface

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Li, Yuguang

    2011-11-01

    A good understanding of the interactions of electromagnetic radiation with the soil surface is important for further improvement of remote sensing methods. In this paper, a radiosity-based analytical model for soil Directional Reflectance Factor (DRF) distributions was developed and evaluated. The model was specifically dedicated to the study of radiation transfer for soil surfaces under tillage practices. The soil was abstracted as two-dimensional U-shaped or V-shaped geometric structures with periodic macroscopic variations. The roughness of the simulated surfaces was expressed as the ratio of the height to the width of the U- and V-shaped structures. The assumption was made that the shadowing of the soil surface, simulated by U- or V-shaped grooves, has a greater influence on the soil reflectance distribution than the scattering properties of basic soil particles of silt and clay. Another assumption was that the soil is a perfectly diffuse reflector at a microscopic level, which is a prerequisite for the application of the radiosity method. This radiosity-based analytical model was evaluated against a forward Monte Carlo ray-tracing model under the same structural scenes and identical spectral parameters. The statistics of the two models' BRF fitting results for several soil structures under the same conditions showed good agreement. By using the model, the physical mechanism of the soil bidirectional reflectance pattern was revealed.
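
    As a minimal sketch of the radiosity idea underlying the model above (not the authors' implementation, whose geometry and form factors are specific to the U- and V-shaped grooves), the code below solves the standard radiosity balance B = E + diag(rho) F B for Lambertian facets; the form-factor matrix and values are made-up placeholders.

        import numpy as np

        # Radiosity balance for N Lambertian facets: B = E + diag(rho) @ F @ B,
        # i.e. (I - diag(rho) F) B = E.  F[i, j] is the form factor from facet i to j.
        def solve_radiosity(emission, reflectance, form_factors):
            n = len(emission)
            rho = np.diag(reflectance)
            return np.linalg.solve(np.eye(n) - rho @ form_factors, emission)

        # Hypothetical 3-facet groove cross-section (values are illustrative only).
        E = np.array([1.0, 0.2, 0.0])          # direct irradiance reaching each facet
        rho = np.array([0.3, 0.3, 0.3])        # soil facet reflectance
        F = np.array([[0.0, 0.4, 0.1],
                      [0.4, 0.0, 0.4],
                      [0.1, 0.4, 0.0]])        # rows should sum to <= 1 in a valid scene
        print(solve_radiosity(E, rho, F))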

  18. Equilibrium of Global Amphibian Species Distributions with Climate

    PubMed Central

    Munguía, Mariana; Rahbek, Carsten; Rangel, Thiago F.; Diniz-Filho, Jose Alexandre F.; Araújo, Miguel B.

    2012-01-01

    A common assumption in bioclimatic envelope modeling is that species distributions are in equilibrium with contemporary climate. A number of studies have measured departures from equilibrium in species distributions in particular regions, but such investigations were never carried out for a complete lineage across its entire distribution. We measure departures from equilibrium with contemporary climate for the distributions of the world's amphibian species. Specifically, we fitted bioclimatic envelopes for 5544 species using three presence-only models. We then measured the proportion of the modeled envelope that is currently occupied by the species, as a metric of equilibrium of species distributions with climate. The assumption was that the greater the difference between the modeled bioclimatic envelope and the occupied distribution, the greater the likelihood that the species distribution would not be at equilibrium with contemporary climate. On average, amphibians occupied 30% to 57% of their potential distributions. Although patterns differed across regions, there were no significant differences among lineages. Species in the Neotropics, Afrotropics, Indo-Malay, and Palaearctic occupied a smaller proportion of their potential distributions than species in the Nearctic, Madagascar, and Australasia. We acknowledge that our models underestimate non-equilibrium, and discuss potential reasons for the observed patterns. From a modeling perspective, our results support the view that, at the global scale, bioclimatic envelope models might perform similarly across lineages but differently across regions. PMID:22511938

  19. Incremental Learning With Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

    PubMed Central

    Gao, Yaozong; Zhan, Yiqiang

    2015-01-01

    Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983

  20. A Feature-Based Approach to Modeling Protein–DNA Interactions

    PubMed Central

    Segal, Eran

    2008-01-01

    Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF–DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/. PMID:18725950
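
    To make the position-independence assumption behind PSSMs concrete (the FMM itself is a log-linear model over multi-position sequence features and is not reproduced here), the toy sketch below scores a candidate site by summing independent per-position log-odds; the matrix values are invented for illustration.

        # Toy PSSM: log-odds score of each base at each of four positions (hypothetical values).
        pssm = [
            {'A': 1.2, 'C': -0.8, 'G': -0.5, 'T': -1.0},
            {'A': -0.7, 'C': 1.0, 'G': -0.9, 'T': -0.4},
            {'A': -1.1, 'C': -0.6, 'G': 1.3, 'T': -0.8},
            {'A': 0.9, 'C': -0.5, 'G': -0.7, 'T': -1.2},
        ]

        def pssm_score(site):
            # Position independence: the total score is a sum of per-position terms,
            # so no interaction between binding positions can be represented.
            return sum(pssm[i][base] for i, base in enumerate(site))

        print(pssm_score("ACGA"))   # scores the candidate site base by base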

  1. Critical frontier of the Potts and percolation models on triangular-type and kagome-type lattices. II. Numerical analysis

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Fu, Zhe; Guo, Wenan; Wu, F. Y.

    2010-06-01

    In the preceding paper, one of us (F. Y. Wu) considered the Potts model and bond and site percolation on two general classes of two-dimensional lattices, the triangular-type and kagome-type lattices, and obtained closed-form expressions for the critical frontier with applications to various lattice models. For the triangular-type lattices Wu’s result is exact, and for the kagome-type lattices Wu’s expression is obtained under a homogeneity assumption. The purpose of the present paper is twofold: First, an essential step in Wu’s analysis is the derivation of lattice-dependent constants A, B, C for various lattice models, a process which can be tedious. We present here a derivation of these constants for subnet networks using a computer algorithm. Second, by means of a finite-size scaling analysis based on numerical transfer matrix calculations, we deduce critical properties and critical thresholds of various models and assess the accuracy of the homogeneity assumption. Specifically, we analyze the q-state Potts model and bond percolation on the 3-12 and kagome-type subnet lattices (n×n):(n×n), n≤4, for which the exact solution is not known. Our numerical determination of critical properties such as conformal anomaly and magnetic correlation length verifies that the universality principle holds. To calibrate the accuracy of the finite-size procedure, we apply the same numerical analysis to models for which the exact critical frontiers are known. The comparison of numerical and exact results shows that our numerical values are correct within the errors of our finite-size analysis, which correspond to 7 or 8 significant digits. This in turn implies that the homogeneity assumption determines critical frontiers with an accuracy of 5 decimal places or higher. Finally, we also obtained the exact percolation thresholds for site percolation on kagome-type subnet lattices (1×1):(n×n) for 1≤n≤6.

  2. Meta-analysis of diagnostic accuracy studies accounting for disease prevalence: alternative parameterizations and model selection.

    PubMed

    Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles

    2009-08-15

    In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
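
    As a hedged schematic of the kind of trivariate random-effects structure described above, the sketch below draws study-specific (prevalence, sensitivity, specificity) triples from a trivariate normal distribution on the logit scale; the link function, parameter values, and variable names are our own assumptions, not the authors' exact parameterization.

        import numpy as np

        rng = np.random.default_rng(1)

        def expit(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Hypothetical population means (logit scale) and between-study covariance
        # for (prevalence, sensitivity, specificity); values are illustrative only.
        mu = np.array([-1.0, 1.5, 2.0])
        Sigma = np.array([[0.30, 0.05, -0.05],
                          [0.05, 0.25,  0.10],
                          [-0.05, 0.10, 0.20]])

        # Draw study-specific (prevalence, sensitivity, specificity) triples; in a full
        # model these would feed binomial likelihoods for the observed study counts.
        logits = rng.multivariate_normal(mu, Sigma, size=5)
        prev, sens, spec = expit(logits).T
        print(np.column_stack([prev, sens, spec]).round(3))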

  3. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Models). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size for maximizing fisheries benefits increases with movement complexity, from ~10% under the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA designs and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish, and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests is an effective alternative for managing highly mobile pelagic stocks.

  4. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model provides a better fit than the state space model with linear and Gaussian assumptions.
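
    The paper's algorithms are not reproduced in the record above; as a hedged illustration of the kind of Monte Carlo state estimation it refers to, the sketch below runs a bootstrap particle filter that tracks a latent degradation indicator from noisy indirect measurements under an assumed nonlinear, non-Gaussian state model. The model functions, noise levels, and variable names are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        def degrade(x, dt):
            # Assumed monotone (irreversible) degradation dynamics with gamma-distributed increments.
            return x + rng.gamma(shape=2.0, scale=0.05 * dt, size=x.shape)

        def measure_likelihood(y, x):
            # Assumed nonlinear link between the indirect indicator y and the latent state x.
            return np.exp(-0.5 * ((y - np.sqrt(x)) / 0.1) ** 2)

        def particle_filter(observations, dts, n_particles=1000):
            particles = np.full(n_particles, 0.1)          # initial crack depth guess
            estimates = []
            for y, dt in zip(observations, dts):
                particles = degrade(particles, dt)          # propagate through the state model
                weights = measure_likelihood(y, particles)  # weight by measurement likelihood
                weights /= weights.sum()
                idx = rng.choice(n_particles, size=n_particles, p=weights)
                particles = particles[idx]                  # resample (bootstrap filter)
                estimates.append(particles.mean())
            return estimates

        # Irregular inspection intervals are handled naturally (no fixed-step assumption).
        dts = [1.0, 2.5, 0.5, 3.0]
        obs = [0.40, 0.55, 0.58, 0.70]
        print(particle_filter(obs, dts))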

  5. Examination of various turbulence models for application in liquid rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1991-01-01

    There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, the Reynolds stress/flux model, the zero equation model, the one equation model, the two equation k-epsilon model, the multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. Turbulence is characterized by randomness, irregularity, diffusivity, and dissipation. The capabilities of the turbulence models, including physical strengths, weaknesses, and limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. At a workshop convened specifically to assess turbulence models for applications in liquid rocket thrust chambers, most of the experts present also favored the recommendation of the Reynolds stress model.

  6. Projections of health care expenditures as a share of the GDP: actuarial and macroeconomic approaches.

    PubMed Central

    Warshawsky, M J

    1994-01-01

    STUDY QUESTION. Can the steady increases in health care expenditures as a share of GDP projected by widely cited actuarial models be rationalized by a macroeconomic model with sensible parameters and specification? DATA SOURCES. National Income and Product Accounts, and Social Security and Health Care Financing Administration are the data sources used in parameters estimates. STUDY DESIGN. Health care expenditures as a share of gross domestic product (GDP) are projected using two methodological approaches--actuarial and macroeconomic--and under various assumptions. The general equilibrium macroeconomic approach has the advantage of allowing an investigation of the causes of growth in the health care sector and its consequences for the overall economy. DATA COLLECTION METHODS. Simulations are used. PRINCIPAL FINDINGS. Both models unanimously project a continued increase in the ratio of health care expenditures to GDP. Under the most conservative assumptions, that is, robust economic growth, improved demographic trends, or a significant moderation in the rate of health care price inflation, the health care sector will consume more than a quarter of national output by 2065. Under other (perhaps more realistic) assumptions, including a continuation of current trends, both approaches predict that health care expenditures will comprise between a third and a half of national output. In the macroeconomic model, the increasing use of capital goods in the health care sector explains the observed rise in relative prices. Moreover, this "capital deepening" implies that a relatively modest fraction of the labor force is employed in health care and that the rest of the economy is increasingly starved for capital, resulting in a declining standard of living. PMID:8063567

  7. Cow-specific treatment of clinical mastitis: an economic approach.

    PubMed

    Steeneveld, W; van Werven, T; Barkema, H W; Hogeveen, H

    2011-01-01

    Under Dutch circumstances, most clinical mastitis (CM) cases of cows on dairy farms are treated with a standard intramammary antimicrobial treatment. Several antimicrobial treatments are available for CM, differing in antimicrobial compound, route of application, duration, and cost. Because cow factors (e.g., parity, stage of lactation, and somatic cell count history) and the causal pathogen influence the probability of cure, cow-specific treatment of CM is often recommended. The objective of this study was to determine if cow-specific treatment of CM is economically beneficial. Using a stochastic Monte Carlo simulation model, 20,000 CM cases were simulated. These CM cases were caused by Streptococcus uberis and Streptococcus dysgalactiae (40%), Staphylococcus aureus (30%), or Escherichia coli (30%). For each simulated CM case, the consequences of using different antimicrobial treatment regimens (standard 3-d intramammary, extended 5-d intramammary, combination 3-d intramammary+systemic, combination 3-d intramammary+systemic+1-d nonsteroidal antiinflammatory drugs, and combination extended 5-d intramammary+systemic) were simulated simultaneously. Finally, total costs of the 5 antimicrobial treatment regimens were compared. Some inputs for the model were based on literature information and assumptions made by the authors were used if no information was available. Bacteriological cure for each individual cow depended on the antimicrobial treatment regimen, the causal pathogen, and the cow factors parity, stage of lactation, somatic cell count history, CM history, and whether the cow was systemically ill. Total costs for each case depended on treatment costs for the initial CM case (including costs for antibiotics, milk withdrawal, and labor), treatment costs for follow-up CM cases, costs for milk production losses, and costs for culling. Average total costs for CM using the 5 treatments were (US) $224, $247, $253, $260, and $275, respectively. Average probabilities of bacteriological cure for the 5 treatments were 0.53, 0.65, 0.65, 0.68, and 0.75, respectively. For all different simulated CM cases, the standard 3-d intramammary antimicrobial treatment had the lowest total costs. The benefits of lower costs for milk production losses and culling for cases treated with the intensive treatments did not outweigh the higher treatment costs. The stochastic model was developed using information from the literature and assumptions made by the authors. Using these information sources resulted in a difference in effectiveness of different antimicrobial treatments for CM. Based on our assumptions, cow-specific treatment of CM was not economically beneficial. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Does muscle creatine phosphokinase have access to the total pool of phosphocreatine plus creatine?

    PubMed

    Hochachka, P W; Mossey, M K

    1998-03-01

    Two fundamental assumptions underlie currently accepted dogma on creatine phosphokinase (CPK) function in phosphagen-containing cells: 1) CPK always operates near equilibrium and 2) CPK has access to, and reacts with, the entire pool of phosphocreatine (PCr) and creatine (Cr). We tested the latter assumption in fish fast-twitch or white muscle (WM) by introducing [14C]Cr into the WM pool in vivo. To avoid complications arising from working with muscles formed from a mixture of fast and slow fibers, it was advantageous to work with fish WM because it is uniformly fast twitch and is anatomically separated from other fiber types. According to current theory, at steady state after [14C]Cr administration, the specific activities of PCr and Cr should be the same under essentially all conditions. In contrast, we found that, in various metabolic states between rest and recovery from exercise, the specific activity of PCr greatly exceeds that of Cr. The data imply that a significant fraction of Cr is not free to rapidly exchange with exogenously added [14C]Cr. Releasing of this unlabeled or "cold" Cr on acid extraction accounts for lowered specific activities. This unexpected and provocative result is not consistent with traditional models of phosphagen function.

  9. Structural design models for tunnels in soft soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duddeck, H.; Erdmann, J.

    In 1982 the ITA (International Tunnelling Association) working group on structural design models for tunnelling published the answers to a questionnaire in the form of a synopsis. As a continuation of that work, results of an investigation on design models for soft ground tunnels are presented and a comparative review of the progress to date in this field is given. The main differences in the assumptions entering the different models are stated. Diagrams for the hoop forces, bending moments and radial displacements show the differences in the design values evaluated for three different models: (1) the continuum models; (2) the design model by Muir Wood; and (3) the bedded beam model without bedding at the crown region. Because a comparison with free parameters necessitates analytical solutions, only circular cross-sections were investigated. Nevertheless, the results of the investigation may also be valid to a great extent for noncircular cross-sections and more refined numerical analyses. It can be shown that there is a trend toward agreement on the proper assumptions and on the design models applied either for shallow or for deep tunnels. As should be expected, the bending moments are sensitive to the model chosen, whereas the hoop forces in the tunnel ring are rather unaffected by changes in ground and lining properties. The significance of the nonlinearity due to geometrical deformations or to plastic behavior is demonstrated with specific examples.

  10. WindPACT Reference Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, Katherine L; Rinker, Jennifer

    To fully understand how loads and turbine cost scale with turbine size, it is necessary to have identical turbine models that have been scaled to different rated powers. This report presents the WindPACT baseline models, a series of four baseline models designed to facilitate investigations into the scaling of loads and turbine cost with size. The models have four different rated powers (750 kW, 1.5 MW, 3.0 MW, and 5.0 MW), and each model was designed to its specified rated power using the same design methodology. The models were originally implemented in FAST_AD, the predecessor to NREL's open-source wind turbine simulator FAST, but have yet to be implemented in FAST. This report contains the specifications for all four WindPACT baseline models - including structural, aerodynamic, and control specifications - along with the inherent assumptions and equations that were used to calculate the model parameters. It is hoped that these baseline models will serve as extremely useful resources for investigations into the scaling of costs, loads, or optimization routines.

  11. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All of the considered prediction methods make assumptions that the time series data must satisfy for the predictions to be accurate. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results, by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
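
    A minimal sketch of predictive parameter control with linear regression, in the spirit of the record above (the window length and the proportional selection rule are our own assumptions, not the paper's): fit a trend to each parameter value's recent performance, predict the next step, and convert the predictions into selection probabilities.

        import numpy as np

        def predict_next(history, window=5):
            """Linear-regression forecast of the next performance value from recent history."""
            y = np.asarray(history[-window:], dtype=float)
            t = np.arange(len(y))
            slope, intercept = np.polyfit(t, y, 1)
            return slope * len(y) + intercept

        # Hypothetical performance histories for three candidate mutation rates.
        histories = {0.01: [0.40, 0.42, 0.45, 0.47, 0.50],
                     0.05: [0.55, 0.54, 0.52, 0.50, 0.49],
                     0.10: [0.30, 0.33, 0.34, 0.36, 0.37]}

        predictions = {p: max(predict_next(h), 1e-9) for p, h in histories.items()}
        total = sum(predictions.values())
        probabilities = {p: v / total for p, v in predictions.items()}
        print(probabilities)   # selection probabilities for the next iteration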

  12. Simulating polarized light scattering in terrestrial snow based on bicontinuous random medium and Monte Carlo ray tracing

    NASA Astrophysics Data System (ADS)

    Xiong, Chuan; Shi, Jiancheng

    2014-01-01

    To date, light scattering models of snow have taken little account of real snow microstructure. The assumption of ideal spherical or other single-shape particles in previous snow light scattering models can introduce errors into the light scattering modeling of snow and, in turn, into remote sensing inversion algorithms. This paper builds a polarized reflectance model of snow based on a bicontinuous medium, in which the real snow microstructure is considered. The specific surface area of the bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF), and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good agreement. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted, and this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is also validated and proves accurate, so it can be further applied in polarized remote sensing.

  13. A Longitudinal Test of the Demand–Control Model Using Specific Job Demands and Specific Job Control

    PubMed Central

    van Vegchel, Natasja; Shimazu, Akihito; Schaufeli, Wilmar; Dormann, Christian

    2010-01-01

    Background Supportive studies of the demand–control (DC) model were more likely to measure specific demands combined with a corresponding aspect of control. Purpose A longitudinal test of Karasek’s (Adm Sci Q. 24:285–308, 1) job strain hypothesis including specific measures of job demands and job control, and both self-report and objectively recorded well-being. Method Job strain hypothesis was tested among 267 health care employees from a two-wave Dutch panel survey with a 2-year time lag. Results Significant demand/control interactions were found for mental and emotional demands, but not for physical demands. The association between job demands and job satisfaction was positive in case of high job control, whereas this association was negative in case of low job control. In addition, the relation between job demands and psychosomatic health symptoms/sickness absence was negative in case of high job control and positive in case of low control. Conclusion Longitudinal support was found for the core assumption of the DC model with specific measures of job demands and job control as well as self-report and objectively recorded well-being. PMID:20195810

  14. Defense and the Economy

    DTIC Science & Technology

    1993-01-01

    Assumptions ... Modeling Productivity ... a macroeconomic model of the U.S. economy, designed to provide long-range projections consistent with trends in production technology, shifts in ... investments in roads, bridges, sewer systems, etc. In addition to these modeling assumptions, we also have introduced productivity increases to reflect the ...

  15. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  16. Forced in-plane vibration of a thick ring on a unilateral elastic foundation

    NASA Astrophysics Data System (ADS)

    Wang, Chunjian; Ayalew, Beshah; Rhyne, Timothy; Cron, Steve; Dailliez, Benoit

    2016-10-01

    Most existing studies of a deformable ring on an elastic foundation rely on the assumption of a linear foundation. This assumption is insufficient in cases where the foundation may have a unilateral stiffness that vanishes in compression or tension, such as in non-pneumatic tires and bushing bearings. This paper analyzes the in-plane dynamics of such a thick ring on a unilateral elastic foundation, specifically, on a two-parameter unilateral elastic foundation, where the stiffness of the foundation is treated as linear in the circumferential direction but unilateral (i.e. collapsible or tensionless) in the radial direction. The thick ring is modeled as an orthotropic and extensible circular Timoshenko beam. An arbitrarily distributed time-varying in-plane force is considered as the excitation. The equations of motion are explicitly derived, and a solution method is proposed that uses an implicit Newmark scheme for the time-domain solution and an iterative compensation approach to determine the unilateral zone of the foundation at each time step. The dynamic axle force transmission is also analyzed. Illustrative forced vibration responses obtained from the proposed model and solution method are compared with those obtained from a finite element model.
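
    The paper's unilateral-foundation iteration is not reproduced in the record above; as a hedged sketch of the implicit Newmark scheme it builds on, the code below advances M x'' + C x' + K x = f(t) by one step with the average-acceleration parameters (beta = 1/4, gamma = 1/2). The matrices and the load are placeholders, not the ring model.

        import numpy as np

        def newmark_step(M, C, K, x, v, a, f_next, dt, beta=0.25, gamma=0.5):
            """One implicit Newmark-beta step for M x'' + C x' + K x = f."""
            # Effective stiffness and effective load (standard Newmark relations).
            K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            f_eff = (f_next
                     + M @ (x / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                     + C @ (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                            + dt * (gamma / (2 * beta) - 1) * a))
            x_new = np.linalg.solve(K_eff, f_eff)
            a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            return x_new, v_new, a_new

        # Two-DOF toy system with a sinusoidal in-plane load (illustrative values only).
        M = np.diag([1.0, 1.0]); C = 0.02 * np.eye(2)
        K = np.array([[4.0, -1.0], [-1.0, 3.0]])
        x = np.zeros(2); v = np.zeros(2); a = np.zeros(2)
        dt = 0.01
        for n in range(1, 501):
            f = np.array([np.sin(2 * np.pi * 1.0 * n * dt), 0.0])
            x, v, a = newmark_step(M, C, K, x, v, a, f, dt)
        print(x)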

  17. Quantum oscillations in a bilayer with broken mirror symmetry: A minimal model for YBa2Cu3O6+δ

    DOE PAGES

    Maharaj, Akash V.; Zhang, Yi; Ramshaw, B. J.; ...

    2016-03-01

    Using an exact numerical solution and semiclassical analysis, we investigate quantum oscillations (QOs) in a model of a bilayer system with an anisotropic (elliptical) electron pocket in each plane. Key features of QO experiments in the high temperature superconducting cuprate YBCO can be reproduced by such a model, in particular the pattern of oscillation frequencies (which reflect “magnetic breakdown” between the two pockets) and the polar and azimuthal angular dependence of the oscillation amplitudes. However, the requisite magnetic breakdown is possible only under the assumption that the horizontal mirror plane symmetry is spontaneously broken and that the bilayer tunneling t⊥ is substantially renormalized from its ‘bare’ value. Lastly, under the assumption that t⊥ = Z̃ t⊥(0), where Z̃ is a measure of the quasiparticle weight, this suggests that Z̃ ≲ 1/20. Detailed comparisons with new YBa2Cu3O6.58 QO data, taken over a very broad range of magnetic field, confirm specific predictions made by the breakdown scenario.

  18. A model for foam formation, stability, and breakdown in glass-melting furnaces.

    PubMed

    van der Schaaf, John; Beerkens, Ruud G C

    2006-03-01

    A dynamic model for describing the build-up and breakdown of a glass-melt foam is presented. The foam height is determined by the gas flux to the glass-melt surface and the drainage rate of the liquid lamellae between the gas bubbles. The drainage rate is determined by the average gas bubble radius and the physical properties of the glass melt: density, viscosity, surface tension, and interfacial mobility. Neither the assumption of a fully mobile nor the assumption of a fully immobile glass-melt interface describes the observed foam formation on glass melts adequately. The glass-melt interface appears partially mobile due to the presence of surface-active species, e.g., sodium sulfate and silanol groups. The partial mobility can be represented by a single parameter psi that is specific to the glass-melt composition. The value of psi can be estimated from gas bubble lifetime experiments under furnace conditions. With this parameter, laboratory experiments of foam build-up and breakdown in a glass melt are adequately described, qualitatively and quantitatively, by a set of ordinary differential equations. An approximate explicit relationship for the prediction of the steady-state foam height is derived from the fundamental model.
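
    As a heavily hedged toy illustration of the kind of balance described above (gas flux building the foam, drainage-controlled collapse removing it), and emphatically not the authors' equations, the sketch below integrates a single lumped ODE in which the timescale tau stands in for the combined effect of viscosity, bubble radius, surface tension, and the mobility parameter psi; all forms and values are our own placeholders.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy foam-height balance: bubbles arrive with the gas flux and collapse on a
        # drainage-controlled timescale tau (a lumped, assumed parameter).
        def foam_height(t, h, gas_flux, tau):
            return gas_flux - h / tau

        gas_flux = 2.0e-4   # m of foam added per second (placeholder)
        tau = 600.0         # effective drainage/rupture timescale in seconds (placeholder)

        sol = solve_ivp(foam_height, (0.0, 3600.0), [0.0], args=(gas_flux, tau))
        print("steady-state estimate:", gas_flux * tau, "m;  h(1 h) =", sol.y[0, -1], "m")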

  19. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    PubMed

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is an important cause of domino effects in accidents involving chemical process equipment. Damage probability and the related threshold value are two necessary parameters in QRA of this phenomenon. Simple models had previously been proposed based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and equipment damage degree was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements offered by the present models were demonstrated through comparison with other models in the literature, taking into account the consistency between models and data and the depth of quantitativeness in QRA.
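
    The specific coefficients fitted in the paper are not reproduced here; as a hedged sketch of the general probit convention commonly used in this kind of QRA, the code below maps a dose variable (here peak overpressure) to a damage probability through a normal CDF, with placeholder coefficients for a hypothetical equipment category.

        from math import log
        from statistics import NormalDist

        def damage_probability(overpressure_pa, k1, k2):
            """Probit damage model: Y = k1 + k2*ln(P); probability = Phi(Y - 5)."""
            y = k1 + k2 * log(overpressure_pa)
            return NormalDist().cdf(y - 5.0)

        # Placeholder coefficients for one hypothetical equipment category.
        k1, k2 = -18.0, 2.2
        for p in (5e3, 2e4, 5e4):  # peak overpressure in Pa
            print(p, round(damage_probability(p, k1, k2), 3))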

  20. LS³: A Method for Improving Phylogenomic Inferences When Evolutionary Rates Are Heterogeneous among Taxa

    PubMed Central

    Rivera-Rivera, Carlos J.; Montoya-Burgos, Juan I.

    2016-01-01

    Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation gives the user the option of removing the flagged sequences to generate a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing data flagged by LS³. PMID:26912812
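
    The core statistical step described above is a likelihood ratio test between a homogeneous-rate and a heterogeneous-rate model; the sketch below performs that test given the two log-likelihoods, with the degrees of freedom taken as the number of extra free lineage rates (an assumption on our part; the actual test lives in the LS³ software, which is not reproduced here).

        from scipy.stats import chi2

        def rate_homogeneity_test(lnl_homogeneous, lnl_heterogeneous, extra_params, alpha=0.05):
            """Likelihood ratio test of equal lineage rates vs. lineage-specific rates."""
            lr_stat = 2.0 * (lnl_heterogeneous - lnl_homogeneous)
            p_value = chi2.sf(lr_stat, df=extra_params)
            return p_value, p_value >= alpha   # True -> rates treated as homogeneous

        # Hypothetical log-likelihoods for one gene with three predefined ingroup lineages.
        print(rate_homogeneity_test(-12456.3, -12449.8, extra_params=2))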

  1. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  2. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  3. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  4. Model of bidirectional reflectance distribution function for metallic materials

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun

    2016-09-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
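
    The record above does not give the model's equations; the sketch below only shows the generic three-component composition it describes (specular plus directional diffuse plus ideal diffuse), with placeholder lobe shapes and weights that are our own assumptions rather than the paper's forms.

        import numpy as np

        def brdf_three_component(theta_i, theta_r, phi, k_s, k_dd, k_id, f_spec, f_dd):
            """Three-component BRDF: specular + directional diffuse + ideal (Lambertian) diffuse."""
            return (k_s * f_spec(theta_i, theta_r, phi)
                    + k_dd * f_dd(theta_i, theta_r, phi)
                    + k_id / np.pi)

        # Placeholder lobes: a narrow specular lobe near the mirror direction and a
        # broad directional-diffuse lobe (illustrative forms only).
        f_spec = lambda ti, tr, ph: np.exp(-((tr - ti) ** 2 + ph ** 2) / 0.01)
        f_dd = lambda ti, tr, ph: np.cos(tr) * np.exp(-ph ** 2 / 2.0)

        print(brdf_three_component(np.radians(30), np.radians(32), 0.05,
                                   k_s=0.6, k_dd=0.3, k_id=0.1,
                                   f_spec=f_spec, f_dd=f_dd))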

  5. Generalized antifungal activity and 454-screening of Pseudonocardia and Amycolatopsis bacteria in nests of fungus-growing ants.

    PubMed

    Sen, Ruchira; Ishak, Heather D; Estrada, Dora; Dowd, Scot E; Hong, Eunki; Mueller, Ulrich G

    2009-10-20

    In many host-microbe mutualisms, hosts use beneficial metabolites supplied by microbial symbionts. Fungus-growing (attine) ants are thought to form such a mutualism with Pseudonocardia bacteria to derive antibiotics that specifically suppress the coevolving pathogen Escovopsis, which infects the ants' fungal gardens and reduces growth. Here we test 4 key assumptions of this Pseudonocardia-Escovopsis coevolution model. Culture-dependent and culture-independent (tag-encoded 454-pyrosequencing) surveys reveal that several Pseudonocardia species and occasionally Amycolatopsis (a close relative of Pseudonocardia) co-occur on workers from a single nest, contradicting the assumption of a single pseudonocardiaceous strain per nest. Pseudonocardia can occur on males, suggesting that Pseudonocardia could also be horizontally transmitted during mating. Pseudonocardia and Amycolatopsis secretions kill or strongly suppress ant-cultivated fungi, contradicting the previous finding of a growth-enhancing effect of Pseudonocardia on the cultivars. Attine ants therefore may harm their own cultivar if they apply pseudonocardiaceous secretions to actively growing gardens. Pseudonocardia and Amycolatopsis isolates also show nonspecific antifungal activities against saprotrophic, endophytic, entomopathogenic, and garden-pathogenic fungi, contrary to the original report of specific antibiosis against Escovopsis alone. We conclude that attine-associated pseudonocardiaceous bacteria do not exhibit derived antibiotic properties to specifically suppress Escovopsis. We evaluate hypotheses on nonadaptive and adaptive functions of attine integumental bacteria, and develop an alternate conceptual framework to replace the prevailing Pseudonocardia-Escovopsis coevolution model. If association with Pseudonocardia is adaptive to attine ants, alternate roles of such microbes could include the protection of ants or sanitation of the nest.

  6. Generalized antifungal activity and 454-screening of Pseudonocardia and Amycolatopsis bacteria in nests of fungus-growing ants

    PubMed Central

    Sen, Ruchira; Ishak, Heather D.; Estrada, Dora; Dowd, Scot E.; Hong, Eunki; Mueller, Ulrich G.

    2009-01-01

    In many host-microbe mutualisms, hosts use beneficial metabolites supplied by microbial symbionts. Fungus-growing (attine) ants are thought to form such a mutualism with Pseudonocardia bacteria to derive antibiotics that specifically suppress the coevolving pathogen Escovopsis, which infects the ants' fungal gardens and reduces growth. Here we test 4 key assumptions of this Pseudonocardia-Escovopsis coevolution model. Culture-dependent and culture-independent (tag-encoded 454-pyrosequencing) surveys reveal that several Pseudonocardia species and occasionally Amycolatopsis (a close relative of Pseudonocardia) co-occur on workers from a single nest, contradicting the assumption of a single pseudonocardiaceous strain per nest. Pseudonocardia can occur on males, suggesting that Pseudonocardia could also be horizontally transmitted during mating. Pseudonocardia and Amycolatopsis secretions kill or strongly suppress ant-cultivated fungi, contradicting the previous finding of a growth-enhancing effect of Pseudonocardia on the cultivars. Attine ants therefore may harm their own cultivar if they apply pseudonocardiaceous secretions to actively growing gardens. Pseudonocardia and Amycolatopsis isolates also show nonspecific antifungal activities against saprotrophic, endophytic, entomopathogenic, and garden-pathogenic fungi, contrary to the original report of specific antibiosis against Escovopsis alone. We conclude that attine-associated pseudonocardiaceous bacteria do not exhibit derived antibiotic properties to specifically suppress Escovopsis. We evaluate hypotheses on nonadaptive and adaptive functions of attine integumental bacteria, and develop an alternate conceptual framework to replace the prevailing Pseudonocardia-Escovopsis coevolution model. If association with Pseudonocardia is adaptive to attine ants, alternate roles of such microbes could include the protection of ants or sanitation of the nest. PMID:19805175

  7. Design Considerations for Large Computer Communication Networks,

    DTIC Science & Technology

    1976-04-01

    In particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption ... channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed ... hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered

  8. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  9. How to Decide on Modeling Details: Risk and Benefit Assessment.

    PubMed

    Özilgen, Mustafa

    Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Models of microbial growth, death, and enzyme inactivation are also needed in this task, as is the modeling of material properties, including those pertinent to conduction and convection heating, mass transfer (such as diffusion and convective mass transfer), and thermodynamic properties (such as specific heat, enthalpy, Gibbs free energy of formation, and specific chemical exergy). The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws" without considering the conditions under which they are valid runs the risk of ending up with erroneous conclusions. On the other hand, models built from fundamental concepts and simplified with appropriate considerations may offer explanations for phenomena that cannot be obtained from measurements or unprocessed experimental data alone. The discussion presented here is strengthened with case studies and references to the literature.

  10. Diagnosing Diagnostic Models: From Von Neumann's Elephant to Model Equivalencies and Network Psychometrics

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2018-01-01

    This article critically reviews how diagnostic models have been conceptualized and how they compare to other approaches used in educational measurement. In particular, certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models are reviewed and it is questioned whether these assumptions are the…

  11. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  12. A Module Language for Typing by Contracts

    NASA Technical Reports Server (NTRS)

    Glouche, Yann; Talpin, Jean-Pierre; LeGuernic, Paul; Gautier, Thierry

    2009-01-01

    Assume-guarantee reasoning is a popular and expressive paradigm for modular and compositional specification of programs. It is becoming a fundamental concept in some computer-aided design tools for embedded system design. In this paper, we elaborate foundations for contract-based embedded system design by proposing a general-purpose module language based on a Boolean algebra that allows contracts to be defined. In this framework, contracts are used to negotiate the correctness of assumptions made about the definition of a component at the point where it is used and to provide guarantees to its environment. We illustrate this presentation with the specification of a simplified 4-stroke engine model.
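
    As an informal illustration of the assume-guarantee idea (not the paper's module language or its Boolean algebra), a contract can be represented as a pair of predicates, with satisfaction meaning the guarantee holds whenever the assumption does; the component, signal names, and bounds below are hypothetical.

        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class Contract:
            assumption: Callable[[Any], bool]   # what the component assumes of its environment
            guarantee: Callable[[Any], bool]    # what it promises in return

            def satisfied_by(self, behavior) -> bool:
                # A behavior satisfies the contract if the guarantee holds whenever the assumption does.
                return (not self.assumption(behavior)) or self.guarantee(behavior)

        # Hypothetical contract for a component of a simplified engine controller.
        engine_ctrl = Contract(
            assumption=lambda b: 0 <= b["rpm"] <= 8000,
            guarantee=lambda b: b["spark_advance_deg"] <= 40,
        )
        print(engine_ctrl.satisfied_by({"rpm": 3000, "spark_advance_deg": 25}))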

  13. Effects of Model Formulation on Estimates of Health in Individual Right Whales (Eubalaena glacialis).

    PubMed

    Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S

    2016-01-01

    Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.

  14. Non-parametric causality detection: An application to social media and financial data

    NASA Astrophysics Data System (ADS)

    Tsapeli, Fani; Musolesi, Mirco; Tino, Peter

    2017-10-01

    According to behavioral finance, stock market returns are influenced by emotional, social and psychological factors. Several recent works support this theory by providing evidence of correlation between stock market prices and collective sentiment indexes measured using social media data. However, a pure correlation analysis is not sufficient to prove that stock market returns are influenced by such emotional factors since both stock market prices and collective sentiment may be driven by a third unmeasured factor. Controlling for factors that could influence the study by applying multivariate regression models is challenging given the complexity of stock market data. False assumptions about the linearity or non-linearity of the model and inaccuracies in model specification may result in misleading conclusions. In this work, we propose a novel framework for causal inference that does not require any assumption about a particular parametric form of the model expressing statistical relationships among the variables of the study and can effectively control a large number of observed factors. We apply our method in order to estimate the causal impact that information posted in social media may have on stock market returns of four big companies. Our results indicate that social media data not only correlate with stock market returns but also influence them.

  15. Temperature impacts on economic growth warrant stringent mitigation policy

    NASA Astrophysics Data System (ADS)

    Moore, Frances C.; Diaz, Delavane B.

    2015-02-01

    Integrated assessment models compare the costs of greenhouse gas mitigation with damages from climate change to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained gross domestic product (GDP) growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth rates in the DICE model through two pathways, total factor productivity growth and capital depreciation. This damage specification, even under optimistic adaptation assumptions, substantially slows GDP growth in poor regions but has more modest effects in rich countries. Optimal climate policy in this model stabilizes global temperature change below 2 °C by eliminating emissions in the near future and implies a social cost of carbon several times larger than previous estimates. A sensitivity analysis shows that the magnitude of climate change impacts on economic growth, the rate of adaptation, and the dynamic interaction between damages and GDP are three critical uncertainties requiring further research. In particular, optimal mitigation rates are much lower if countries become less sensitive to climate change impacts as they develop, making this a major source of uncertainty and an important subject for future research.
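    The distinction between damages to the level of output and damages to its growth rate can be made concrete with a back-of-the-envelope projection: level damages are lost and regained each period, whereas growth damages compound. The sketch below is an illustrative toy, not the authors' modified DICE model; the baseline growth rate, damage coefficients, and warming path are all assumed values.

    ```python
    import numpy as np

    years = np.arange(2020, 2121, 10)
    T = np.linspace(1.0, 4.0, len(years))   # assumed warming path, deg C above pre-industrial

    g0 = 0.02            # assumed baseline annual GDP growth rate
    level_coef = 0.003   # assumed level-damage coefficient (fraction of GDP per deg C^2)
    growth_coef = 0.005  # assumed growth-damage coefficient (growth points per deg C)

    gdp_level, gdp_growth = 1.0, 1.0
    for i in range(1, len(years)):
        dt = years[i] - years[i - 1]
        # Level damages: output grows at g0, then a share is lost each period.
        gdp_level *= (1 + g0) ** dt * (1 - level_coef * T[i] ** 2)
        # Growth damages: warming lowers the growth rate itself, so losses compound.
        gdp_growth *= (1 + g0 - growth_coef * T[i]) ** dt

    print(f"2120 GDP with level damages : {gdp_level:.2f}x of 2020")
    print(f"2120 GDP with growth damages: {gdp_growth:.2f}x of 2020")
    ```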

  16. Behavioral modeling of human choices reveals dissociable effects of physical effort and temporal delay on reward devaluation.

    PubMed

    Klein-Flügge, Miriam C; Kennerley, Steven W; Saraiva, Ana C; Penny, Will D; Bestmann, Sven

    2015-03-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity.
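    The contrast reported here is easy to visualize with two stylized discount functions: a hyperbolic form for delay, whose steepest devaluation occurs at short delays, and a decreasing sigmoid for effort, which devalues little at low effort and sharply at high effort. The parameter values in the sketch below are illustrative assumptions, not the fitted values from the paper.

    ```python
    import numpy as np

    def hyperbolic_delay_value(amount, delay_s, k=0.05):
        # Hyperbolic discounting: the largest drop in value occurs at short delays.
        return amount / (1.0 + k * delay_s)

    def sigmoidal_effort_value(amount, effort, steepness=12.0, half_effort=0.5):
        # Decreasing sigmoid: small devaluation for low efforts, then a steep drop.
        # 'effort' is a normalized effort level in [0, 1].
        return amount * (1.0 - 1.0 / (1.0 + np.exp(-steepness * (effort - half_effort))))

    amount = 10.0
    delays = np.array([0, 5, 10, 30, 60])          # seconds
    efforts = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # fraction of maximum effort

    print("delay values :", np.round(hyperbolic_delay_value(amount, delays), 2))
    print("effort values:", np.round(sigmoidal_effort_value(amount, efforts), 2))
    ```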

  17. Behavioral Modeling of Human Choices Reveals Dissociable Effects of Physical Effort and Temporal Delay on Reward Devaluation

    PubMed Central

    Klein-Flügge, Miriam C.; Kennerley, Steven W.; Saraiva, Ana C.; Penny, Will D.; Bestmann, Sven

    2015-01-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity. PMID:25816114

  18. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
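    The capillary step common to these models is the inverse proportionality between pore radius and matric pressure (Young-Laplace). The sketch below strings together one possible set of the geometric assumptions named above (spherical particles; cylindrical pores whose volume equals the particle volume times the void ratio; pore length fixed at one particle diameter); this particular closure, the surface tension value, and the example radii are assumptions for illustration, not the formulas of the paper.

    ```python
    import numpy as np

    SIGMA = 0.072     # N/m, surface tension of water near room temperature
    RHO_G = 9810.0    # N/m^3, unit weight of water

    def matric_head_from_particle_radius(R, void_ratio=0.6):
        """Map a characteristic particle radius R (m) to a matric pressure head (m).

        Illustrative geometric closure (one of many possible):
          - spherical particles;
          - cylindrical pores, pore volume = particle volume * void ratio,
            pore length fixed at one particle diameter (2R);
          - capillary relation psi = 2*sigma / r_pore with zero contact angle.
        """
        r_pore = R * np.sqrt(2.0 * void_ratio / 3.0)   # from pi*r^2*(2R) = e*(4/3)*pi*R^3
        psi = 2.0 * SIGMA / r_pore                     # matric pressure, Pa
        return psi / RHO_G                             # pressure head, m of water

    # Illustrative texture classes and particle radii
    for name, radius in [("coarse sand", 5e-4), ("fine sand", 1e-4),
                         ("silt", 1e-5), ("clay-sized", 1e-6)]:
        head = matric_head_from_particle_radius(radius)
        print(f"{name:11s} R = {radius:.0e} m -> head ~ {head:8.2f} m of water")
    ```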

  19. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  20. Investigating the mixture and subdivision of perceptual and conceptual processing in Japanese memory tests.

    PubMed

    Cabeza, R

    1995-03-01

    The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations; the conceptual effect appears tied to the ideographic nature of the kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.

  1. Can a linguistic serial founder effect originating in Africa explain the worldwide phonemic cline?

    PubMed Central

    2016-01-01

    It has been proposed that a serial founder effect could have caused the present observed pattern of global phonemic diversity. Here we present a model that simulates the human range expansion out of Africa and the subsequent spatial linguistic dynamics until today. It does not assume copying errors, Darwinian competition, reduced contrastive possibilities or any other specific linguistic mechanism. We show that the decrease of linguistic diversity with distance (from the presumed origin of the expansion) arises under three assumptions, previously introduced by other authors: (i) an accumulation rate for phonemes; (ii) small phonemic inventories for the languages spoken before the out-of-Africa dispersal; (iii) an increase in the phonemic accumulation rate with the number of speakers per unit area. Numerical simulations show that the predictions of the model agree with the observed decrease of linguistic diversity with increasing distance from the most likely origin of the out-of-Africa dispersal. Thus, the proposal that a serial founder effect could have caused the present observed pattern of global phonemic diversity is viable, if three strong assumptions are satisfied. PMID:27122180

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrack, A.G.

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
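    For independent basic events, the gate arithmetic behind such top-event frequency estimates is simple: OR gates combine complements, AND gates multiply. The sketch below illustrates only that arithmetic with invented probabilities; it is not based on the DWPF fault trees or their data.

    ```python
    from functools import reduce

    def p_and(probs):
        # AND gate: all independent basic events must occur.
        return reduce(lambda a, b: a * b, probs, 1.0)

    def p_or(probs):
        # OR gate: at least one independent basic event occurs.
        return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

    # Hypothetical per-year probabilities (illustrative only)
    purge_failure   = p_or([1e-3, 5e-4])   # loss of purge from either of two causes
    ignition_source = 2e-2                 # assumed ignition source availability
    operator_error  = 5e-3                 # assumed human error probability

    top_event = p_and([purge_failure, ignition_source, operator_error])
    print(f"illustrative top-event frequency ~ {top_event:.2e} per year")
    ```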

  3. Application of Bayesian inference to the study of hierarchical organization in self-organized complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Knuth, K. H.

    2001-05-01

    We consider the application of Bayesian inference to the study of self-organized structures in complex adaptive systems. In particular, we examine the distribution of elements, agents, or processes in systems dominated by hierarchical structure. We demonstrate that results obtained by Caianiello [1] on Hierarchical Modular Systems (HMS) can be found by applying Jaynes' Principle of Group Invariance [2] to a few key assumptions about our knowledge of hierarchical organization. Subsequent application of the Principle of Maximum Entropy allows inferences to be made about specific systems. The utility of the Bayesian method is considered by examining both successes and failures of the hierarchical model. We discuss how Caianiello's original statements suffer from the Mind Projection Fallacy [3] and we restate his assumptions thus widening the applicability of the HMS model. The relationship between inference and statistical physics, described by Jaynes [4], is reiterated with the expectation that this realization will aid the field of complex systems research by moving away from often inappropriate direct application of statistical mechanics to a more encompassing inferential methodology.

  4. Estimation of effective connectivity via data-driven neural modeling

    PubMed Central

    Freestone, Dean R.; Karoly, Philippa J.; Nešić, Dragan; Aram, Parham; Cook, Mark J.; Grayden, David B.

    2014-01-01

    This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination. PMID:25506315
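    At the heart of such a framework sits a recursive state estimator whose state vector augments the hidden neural dynamics with slowly varying connectivity parameters. The sketch below shows a generic linear Kalman filter predict/update cycle as a schematic stand-in for the paper's estimation scheme; all matrices, the toy measurements, and the random-walk parameter model are assumed for illustration.

    ```python
    import numpy as np

    def kalman_step(x, P, y, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.

        x, P : state estimate and covariance (the state can augment membrane-potential
               dynamics with connectivity parameters treated as random walks)
        y    : new measurement (e.g., an iEEG sample)
        F, H : state-transition and observation matrices
        Q, R : process and measurement noise covariances
        """
        x_pred = F @ x                       # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R             # update
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy example: 2 hidden states plus 1 slowly drifting "connectivity" parameter
    F = np.array([[0.9, 0.1, 0.0],
                  [0.0, 0.8, 0.1],
                  [0.0, 0.0, 1.0]])      # parameter modeled as a random walk
    H = np.array([[1.0, 0.0, 0.0]])      # only the first state is observed
    Q = np.diag([1e-2, 1e-2, 1e-4])
    R = np.array([[1e-1]])

    x, P = np.zeros(3), np.eye(3)
    for y in [0.2, 0.35, 0.4, 0.5]:      # made-up measurements
        x, P = kalman_step(x, P, np.array([y]), F, H, Q, R)
    print("estimated states and parameter:", np.round(x, 3))
    ```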

  5. Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error, which creates pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good-fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not for ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182

  6. Quantum-like dynamics applied to cognition: a consideration of available options

    NASA Astrophysics Data System (ADS)

    Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.

    2017-10-01

    Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue `Second quantum revolution: foundational questions'.
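    In the simplest of these dynamical models, the mental state is a vector in a low-dimensional Hilbert space, the Hamiltonian rotates it over deliberation time, and a response probability is read out by the Born rule. The sketch below is a two-dimensional toy with an assumed Hamiltonian, intended only to illustrate that machinery, not any specific model from the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Two-dimensional "decision" space: basis states |accept> and |reject>
    psi0 = np.array([1.0, 0.0], dtype=complex)      # initial belief state

    # Assumed Hamiltonian (Hermitian): off-diagonal terms drive oscillation
    # between the two response tendencies.
    H = np.array([[0.5, 0.8],
                  [0.8, -0.5]], dtype=complex)

    for t in [0.0, 0.5, 1.0, 2.0]:
        psi_t = expm(-1j * H * t) @ psi0            # unitary Schrodinger evolution
        p_accept = abs(psi_t[0]) ** 2               # Born-rule probability of "accept"
        print(f"t = {t:3.1f}  P(accept) = {p_accept:.3f}")
    ```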

  7. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
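    The generalized cross-correlation family referenced here estimates the delay as the lag that maximizes a weighted cross-power spectrum; the PHAT weighting shown below whitens the spectrum so the estimate depends on phase rather than on the source's amplitude distribution. The sketch is a generic baseline implementation with a synthetic signal, not the information-theoretic estimator developed in the paper.

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        """Estimate the delay of `sig` relative to `ref` (seconds) with GCC-PHAT."""
        n = len(sig) + len(ref)
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        cross = SIG * np.conj(REF)
        cross /= np.abs(cross) + 1e-12          # PHAT weighting (phase transform)
        cc = np.fft.irfft(cross, n=n)
        max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs

    # Synthetic check: delay a noise burst by 25 samples at 16 kHz
    rng = np.random.default_rng(0)
    fs = 16000
    x = rng.standard_normal(4096)
    y = np.concatenate((np.zeros(25), x))[:4096]    # delayed copy of x
    print(f"estimated delay: {gcc_phat(y, x, fs) * 1000:.3f} ms (true 1.563 ms)")
    ```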

  8. Pu-239 organ specific dosimetric model applied to non-human biota

    NASA Astrophysics Data System (ADS)

    Kaspar, Matthew Jason

    There are few locations throughout the world, like the Maralinga nuclear test site located in south western Australia, where sufficient plutonium contaminant concentration levels exist that they can be utilized for studies of the long-term radionuclide accumulation in non-human biota. The information obtained will be useful for the potential human users of the site while also keeping with international efforts to better understand doses to non-human biota. In particular, this study focuses primarily on a rabbit sample set collected from the population located within the site. Our approach is intended to employ the same dose and dose rate methods selected by the International Commission on Radiological Protection and adapted by the scientific community for similar research questions. These models rely on a series of simplifying assumptions on biota and their geometry; in particular, organisms are treated as spherical and ellipsoidal representations that preserve the animal mass and volume. These simplifications assume homogeneity of all animal tissues. In collaborative efforts between Colorado State University and the Australian Nuclear Science and Technology Organisation (ANSTO), we are expanding current knowledge on radionuclide accumulation in specific organs causing organ-specific dose rates, such as Pu-239 accumulating in bone, liver, and lungs. Organ-specific dose models have been developed for humans; however, little has been developed for the dose assessment to biota, in particular rabbits. This study will determine if it is scientifically valid to use standard software, in particular ERICA Tool, as a means to determine organ-specific dosimetry due to Pu-239 accumulation in organs. ERICA Tool is normally applied to whole organisms as a means to determine radiological risk to whole ecosystems. We will focus on the aquatic model within ERICA Tool, as animal organs, like aquatic organisms, can be assumed to lie within an infinite uniform medium. This model would be scientifically valid for radionuclides emitting short-range radiation, as with Pu-239, where the energy is deposited locally. Two MCNPX models have been created and evaluated against ERICA Tool's aquatic model. One MCNPX model replicates ERICA Tool's intrinsic assumptions while the other uses a more realistic animal model adopted by ICRP Publication 108 and ERICA Tool for the organ's "infinite" surrounding universe. In addition, the role of model geometry will be analyzed by focusing on four geometry sets for the same organ, including a spherical geometry. ERICA Tool will be compared to MCNPX results within and between each organ geometry set. In addition, the organ absorbed dose rate will be calculated for six rabbits located on the Maralinga nuclear test site as a preliminary test for further investigation. Data in all cases will be compared using percent differences and Student's t-test with respect to ERICA Tool's results and the overall average organ mean absorbed dose rate.

  9. Comparison of NGA-West2 directivity models

    USGS Publications Warehouse

    Spudich, Paul A.; Rowshandel, Badie; Shahi, Shrey; Baker, Jack W.; Chiou, Brian S-J

    2014-01-01

    Five directivity models have been developed based on data from the NGA-West2 database and based on numerical simulations of large strike-slip and reverse-slip earthquakes. All models avoid the use of normalized rupture dimension, enabling them to scale up to the largest earthquakes in a physically reasonable way. Four of the five models are explicitly “narrow-band” (in which the effect of directivity is maximum at a specific period that is a function of earthquake magnitude). Several strategies for determining the zero-level for directivity have been developed. We show comparisons of maps of the directivity amplification. This comparison suggests that the predicted geographic distributions of directivity amplification are dominated by effects of the models' assumptions, and more than one model should be used for ruptures dipping less than about 65 degrees.

  10. Comparative Study of Shrinkage and Non-Shrinkage Model of Food Drying

    NASA Astrophysics Data System (ADS)

    Shahari, N.; Jamil, N.; Rasmani, KA.

    2016-08-01

    A single-phase heat and mass transfer model is commonly used to represent the moisture and temperature distribution during the drying of food. Several effects of the drying process, such as physical and structural changes, have been considered in order to increase understanding of the movement of water and temperature. However, the comparison between the heat and mass equations with and without structural change (in terms of shrinkage), which can affect the accuracy of the prediction model, has been little investigated. In this paper, two mathematical models describing heat and mass transfer in food, with and without the assumption of structural change, were analysed. The equations were solved using the finite difference method. A converted coordinate system was introduced within the numerical computations for the shrinkage model. The results show that the shrinkage model predicts a higher temperature at a given time than the non-shrinkage model. Furthermore, the predicted moisture content decreased faster when the shrinkage effect was included in the model.
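    The converted-coordinate (boundary-immobilization) step can be written compactly. The notation below is generic and chosen here for illustration, not taken from the paper: M is moisture content, D the effective diffusivity, and L(t) the shrinking slab thickness, with xi = x/L(t) mapping the moving domain onto a fixed one.

    ```latex
    % Moisture diffusion on a shrinking slab 0 <= x <= L(t), rewritten on xi = x/L(t):
    \[
    \frac{\partial M}{\partial t} = D\,\frac{\partial^{2} M}{\partial x^{2}}
    \quad\Longrightarrow\quad
    \frac{\partial m}{\partial t}
      = \frac{D}{L(t)^{2}}\,\frac{\partial^{2} m}{\partial \xi^{2}}
      + \xi\,\frac{\dot{L}(t)}{L(t)}\,\frac{\partial m}{\partial \xi},
    \qquad \xi \in [0,1],
    \]
    % where m(\xi, t) = M(x, t). The extra advection-like term (with \dot{L} < 0
    % during drying) is exactly what the non-shrinkage model omits.
    ```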

  11. Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters

    PubMed Central

    Wozniak, Christopher E.; Hughes, Kelly T.

    2008-01-01

    Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
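    The consensus-style search models under test score a candidate site by summing independent per-position contributions from an alignment-derived matrix, which is exactly where the independence assumption enters. The sketch below builds a toy log-odds position weight matrix and scores two sites; the alignment, pseudocount, and background frequency are invented for illustration and are unrelated to the flagellar data.

    ```python
    import math
    from collections import Counter

    BASES = "ACGT"

    def position_weight_matrix(sites, pseudocount=0.5, background=0.25):
        """Log-odds PWM from an alignment of binding sites.
        Assumes positions contribute independently, the assumption tested in vivo."""
        pwm = []
        for i in range(len(sites[0])):
            counts = Counter(site[i] for site in sites)
            col = {}
            for b in BASES:
                freq = (counts[b] + pseudocount) / (len(sites) + 4 * pseudocount)
                col[b] = math.log2(freq / background)
            pwm.append(col)
        return pwm

    def score(pwm, seq):
        # Independence assumption: the total score is a sum over positions.
        return sum(col[b] for col, b in zip(pwm, seq))

    # Invented alignment of short activator-box-like sites (illustrative only)
    sites = ["TTAACG", "TTAACG", "TTATCG", "TAAACG", "TTAACA"]
    pwm = position_weight_matrix(sites)
    print(f"consensus-like site score: {score(pwm, 'TTAACG'):.2f}")
    print(f"mutated site score       : {score(pwm, 'TTGACG'):.2f}")
    ```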

  12. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural Cox model context.

    PubMed

    Karim, Mohammad Ehsanul; Platt, Robert W

    2017-06-15

    Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of a SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
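    Whichever learner supplies the treatment probabilities, the stabilized weights take the same form. The sketch below shows that step for a single time point, with a plain logistic regression standing in for the learner; the simulated confounders and coefficients are assumptions, and a real MSCM analysis would multiply such weights over time and fit a weighted Cox model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    L = rng.standard_normal((n, 2))                        # simulated confounders
    p_treat = 1 / (1 + np.exp(-(0.4 * L[:, 0] - 0.6 * L[:, 1])))
    A = rng.binomial(1, p_treat)                           # observed treatment

    # Denominator model: P(A = 1 | L). A super learner could replace this fit.
    denom = LogisticRegression().fit(L, A).predict_proba(L)[:, 1]
    # Numerator model for stabilized weights: marginal P(A = 1).
    numer = A.mean()

    sw = np.where(A == 1, numer / denom, (1 - numer) / (1 - denom))
    print(f"stabilized IPW: mean = {sw.mean():.3f} (should be near 1), max = {sw.max():.1f}")
    ```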

  13. Discerning strain effects in microbial dose-response data.

    PubMed

    Coleman, Margaret E; Marks, Harry M; Golden, Neal J; Latimer, Heejeong K

    In order to estimate the risk or probability of adverse events in risk assessment, it is necessary to identify the important variables that contribute to the risk and provide descriptions of distributions of these variables for well-defined populations. One component of modeling dose response that can create uncertainty is the inherent genetic variability among pathogenic bacteria. For many microbial risk assessments, the "default" assumption used for dose response does not account for strain or serotype variability in pathogenicity and virulence, other than perhaps, recognizing the existence of avirulent strains. However, an examination of data sets from human clinical trials in which Salmonella spp. and Campylobacter jejuni strains were administered reveals significant strain differences. This article discusses the evidence for strain variability and concludes that more biologically based alternatives are necessary to replace the default assumptions commonly used in microbial risk assessment, specifically regarding strain variability.

  14. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    USDA-ARS?s Scientific Manuscript database

    Although the two-dimensional (2D) shallow water model is formulated on several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, it has been used as a simple alternative to the complex 3D model to compute water flows in which these assumptions may be ...

  15. Accommodating Missing Data in Mixture Models for Classification by Opinion-Changing Behavior.

    ERIC Educational Resources Information Center

    Hill, Jennifer L.

    2001-01-01

    Explored the assumptions implicit in models reflecting three different approaches to missing survey response data using opinion data collected from Swiss citizens at four time points over nearly 2 years. Results suggest that the latently ignorable model has the least restrictive structural assumptions. Discusses the idea of "durable…

  16. In the Opponent's Shoes: Increasing the Behavioral Validity of Attackers' Judgments in Counterterrorism Models.

    PubMed

    Sri Bhashyam, Sumitra; Montibeller, Gilberto

    2016-04-01

    A key objective for policymakers and analysts dealing with terrorist threats is trying to predict the actions that malicious agents may take. A recent trend in counterterrorism risk analysis is to model the terrorists' judgments, as these will guide their choices of such actions. The standard assumptions in most of these models are that terrorists are fully rational, following all the normative desiderata required for rational choices, such as having a set of constant and ordered preferences, being able to perform a cost-benefit analysis of their alternatives, among many others. However, are such assumptions reasonable from a behavioral perspective? In this article, we analyze the types of assumptions made across various counterterrorism analytical models that represent malicious agents' judgments and discuss their suitability from a descriptive point of view. We then suggest how some of these assumptions could be modified to describe terrorists' preferences more accurately, by drawing knowledge from the fields of behavioral decision research, politics, philosophy of choice, public choice, and conflict management in terrorism. Such insight, we hope, might help make the assumptions of these models more behaviorally valid for counterterrorism risk analysis.

  17. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  18. Will Organic Synthesis Within Icy Grains or on Dust Surfaces in the Primitive Solar Nebula Completely Erase the Effects of Photochemical Self Shielding?

    NASA Technical Reports Server (NTRS)

    Nuth, Joseph A., III; Johnson, Natasha M.

    2012-01-01

    There are at least 3 separate photochemical self-shielding models with different degrees of commonality. All of these models rely on the selective absorption of (12)C(16)O dissociative photons as the radiation source penetrates through the gas, allowing the production of reactive O-17 and O-18 atoms within a specific volume. Each model also assumes that the undissociated C(16)O is stable and does not participate in the chemistry of nebular dust grains. In what follows we will argue that this last, very important assumption is simply not true despite the very high energy of the CO molecular bond.

  19. Adressing optimality principles in DGVMs: Dynamics of Carbon allocation changes

    NASA Astrophysics Data System (ADS)

    Pietsch, Stephan

    2017-04-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant functional type or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong, since: (i) no ecosystem is ever truly at steady state, and (ii) ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general failures within current DGVMs may be addressed.

  20. Adressing optimality principles in DGVMs: Dynamics of Carbon allocation changes.

    NASA Astrophysics Data System (ADS)

    Pietsch, S.

    2016-12-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant functional type or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong: no ecosystem is ever truly at steady state, and ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general failures within current DGVMs may be addressed.

  1. The Change Grid and the Active Client: Challenging the Assumptions of Change Agentry in the Penal Process.

    ERIC Educational Resources Information Center

    Klofas, John; Duffee, David E.

    1981-01-01

    Reexamines the assumptions of the change grid regarding the channeling of masses of clients into change strategies programs. Penal organizations specifically select and place clients so that programs remain stable, rather than sequence programs to meet the needs of clients. (Author)

  2. Stirling Engine External Heat System Design with Heat Pipe Heater.

    DTIC Science & Technology

    1986-07-01

    However, the evaporator analysis is greatly simplified by making the conservative assumption of constant heat flux. [The remainder of the indexed excerpt is a fragment of the report's nomenclature list: ROM, density of the metal (g/cm³); CAPM, specific heat of the metal (cal/(g·K)); ETHG, effective gauze thickness.]

  3. Does Artificial Neural Network Support Connectivism's Assumptions?

    ERIC Educational Resources Information Center

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  4. Evil acts and malicious gossip: a multiagent model of the effects of gossip in socially distributed person perception.

    PubMed

    Smith, Eliot R

    2014-11-01

    Although person perception is central to virtually all human social behavior, it is ordinarily studied in isolated individual perceivers. Conceptualizing it as a socially distributed process opens up a variety of novel issues, which have been addressed in scattered literatures mostly outside of social psychology. This article examines some of these issues using a series of multiagent models. Perceivers can use gossip (information from others about social targets) to improve their ability to detect targets who perform rare negative behaviors. The model suggests that they can simultaneously protect themselves against being influenced by malicious gossip intended to defame specific targets. They can balance these potentially conflicting goals by using specific strategies including disregarding gossip that differs from a personally obtained impression. Multiagent modeling demonstrates the outcomes produced by different combinations of assumptions about gossip, and suggests directions for further research and theoretical development. © 2014 by the Society for Personality and Social Psychology, Inc.

  5. Radar Reflectivity in Wingtip-Generated Wake Vortices

    NASA Technical Reports Server (NTRS)

    Marshall, Robert E.; Mudukutore, Ashok; Wissel, Vicki

    1997-01-01

    This report documents new predictive models of radar reflectivity, with meter-scale resolution, for aircraft wakes in clear air and fog. The models result from a radar design program to locate and quantify wake vortices from commercial aircraft in support of the NASA Aircraft Vortex Spacing System (AVOSS). The radar reflectivity model for clear air assumes: 1) turbulent eddies in the wake produce small discontinuities in radar refractive index; and 2) these turbulent eddies are in the 'inertial subrange' of turbulence. From these assumptions, the maximum radar frequency for detecting a particular aircraft wake, as well as the refractive index structure constant and radar volume reflectivity in the wake, can be obtained from the NASA Terminal Area Simulation System (TASS) output. For fog conditions, an empirical relationship is used to calculate radar reflectivity factor from TASS output of bulk liquid water. Currently, two models exist: 1) Atlas, based on observations of liquid water and radar reflectivity factor in clouds; and 2) de Wolf, specifically tailored to a specific measured dataset (1992 Vandenberg Air Force Base).

  6. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  7. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
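    The core computational claim, a noisy firing-rate accumulator rising linearly to a fixed threshold with Poisson-like (rate-proportional) noise, is easy to simulate, and the simulation reproduces the roughly constant coefficient of variation across target durations that the model is meant to explain. The parameters below are illustrative assumptions, not fitted values from the paper.

    ```python
    import numpy as np

    def timed_responses(target_s, n_trials=500, dt=0.01, cv_noise=0.25, seed=0):
        """Drift-to-threshold timer: drift is set so the mean crossing time equals
        the target interval; increment variance scales with the drift (Poisson-like),
        which is what yields a duration-independent coefficient of variation."""
        rng = np.random.default_rng(seed)
        drift = 1.0 / target_s                      # threshold fixed at 1.0
        sigma = cv_noise * np.sqrt(drift)           # rate-proportional noise variance
        times = np.zeros(n_trials)
        for k in range(n_trials):
            x, t = 0.0, 0.0
            while x < 1.0:
                x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            times[k] = t
        return times

    for target in (2.0, 8.0):
        rt = timed_responses(target)
        print(f"target {target:.0f} s: mean {rt.mean():.2f} s, CV {rt.std() / rt.mean():.2f}")
    ```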

  8. Is herpes zoster vaccination likely to be cost-effective in Canada?

    PubMed

    Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L

    2014-05-30

    To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. Then we transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized. We tabled the results for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies, and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Cost assumptions, discount rate assumptions, assumptions about vaccine efficacy and waning, and epidemiological assumptions drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at a willingness-to-pay threshold of $100,000 per QALY. Only a few studies found a cost per QALY above this threshold, and then only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.

  9. ASSIST user manual

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.; Boerschlein, David P.

    1995-01-01

    Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.

  10. Global warming and extinctions of endemic species from biodiversity hotspots.

    PubMed

    Malcolm, Jay R; Liu, Canran; Neilson, Ronald P; Hansen, Lara; Hannah, Lee

    2006-04-01

    Global warming is a key threat to biodiversity, but few researchers have assessed the magnitude of this threat at the global scale. We used major vegetation types (biomes) as proxies for natural habitats and, based on projected future biome distributions under doubled-CO2 climates, calculated changes in habitat areas and associated extinctions of endemic plant and vertebrate species in biodiversity hotspots. Because of numerous uncertainties in this approach, we undertook a sensitivity analysis of multiple factors that included (1) two global vegetation models, (2) different numbers of biome classes in our biome classification schemes, (3) different assumptions about whether species distributions were biome specific or not, and (4) different migration capabilities. Extinctions were calculated using both species-area and endemic-area relationships. In addition, average required migration rates were calculated for each hotspot assuming a doubled-CO2 climate in 100 years. Projected percent extinctions ranged from <1 to 43% of the endemic biota (average 11.6%), with biome specificity having the greatest influence on the estimates, followed by the global vegetation model and then by migration and biome classification assumptions. Bootstrap comparisons indicated that effects on hotspots as a group were not significantly different from effects on random same-biome collections of grid cells with respect to biome change or migration rates; in some scenarios, however, hotspots exhibited relatively high biome change and low migration rates. Especially vulnerable hotspots were the Cape Floristic Region, Caribbean, Indo-Burma, Mediterranean Basin, Southwest Australia, and Tropical Andes, where plant extinctions per hotspot sometimes exceeded 2000 species. Under the assumption that projected habitat changes were attained in 100 years, estimated global-warming-induced rates of species extinctions in tropical hotspots in some cases exceeded those due to deforestation, supporting suggestions that global warming is one of the most serious threats to the planet's biodiversity.
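    The species-area step behind such projections is compact: if suitable habitat shrinks from A0 to A1, the surviving fraction of species is (A1/A0)^z, so the fraction committed to extinction is 1 - (A1/A0)^z. The sketch below applies that relationship with an assumed exponent z = 0.25 and invented area-change scenarios; it is not the hotspot dataset or the endemic-area variant used in the study.

    ```python
    def fraction_extinct_species_area(area_new, area_old, z=0.25):
        """Species-area relationship S = c * A**z: fraction of species committed to
        extinction when habitat area shrinks from area_old to area_new."""
        return 1.0 - (area_new / area_old) ** z

    # Invented habitat-change scenarios (fraction of original biome area remaining)
    for name, remaining in [("mild biome shift", 0.80),
                            ("moderate loss", 0.50),
                            ("severe loss", 0.20)]:
        pct = 100 * fraction_extinct_species_area(remaining, 1.0)
        print(f"{name:16s}: ~{pct:4.1f}% of endemics projected extinct")
    ```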

  11. Using simulation to aid trial design: Ring-vaccination trials.

    PubMed

    Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc

    2017-03-01

    The 2014-6 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.

  12. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie Corinne Scheidt

    1994-01-01

    This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.

  13. Comparison of the binary logistic and skewed logistic (Scobit) models of injury severity in motor vehicle collisions.

    PubMed

    Tay, Richard

    2016-03-01

    The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance in the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases in which the symmetry assumption is violated in one but not the other case. The differences in the estimates, and thus the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
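    The two models differ only in the link function: the skewed logistic adds a shape parameter so the point of steepest response need not sit at a probability of 0.5. The sketch below contrasts the symmetric logit with a Burr-type scobit link in which alpha = 1 recovers the ordinary logit; the linear-predictor values are invented for illustration.

    ```python
    import numpy as np

    def logit_prob(xb):
        # Symmetric logistic link: steepest response at P = 0.5.
        return 1.0 / (1.0 + np.exp(-xb))

    def scobit_prob(xb, alpha):
        # Skewed logistic (Burr-type) link; alpha = 1 gives the ordinary logit,
        # other values shift the point of steepest response away from P = 0.5.
        return (1.0 + np.exp(-xb)) ** (-alpha)

    xb = np.linspace(-4, 4, 9)
    print("xb          :", np.round(xb, 1))
    print("logit       :", np.round(logit_prob(xb), 3))
    print("scobit a=0.3:", np.round(scobit_prob(xb, 0.3), 3))
    print("scobit a=3.0:", np.round(scobit_prob(xb, 3.0), 3))
    ```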

  14. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    PubMed

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  15. Population Health in Canada: A Brief Critique

    PubMed Central

    Coburn, David; Denny, Keith; Mykhalovskiy, Eric; McDonough, Peggy; Robertson, Ann; Love, Rhonda

    2003-01-01

    An internationally influential model of population health was developed in Canada in the 1990s, shifting the research agenda beyond health care to the social and economic determinants of health. While agreeing that health has important social determinants, the authors believe that this model has serious shortcomings; they critique the model by focusing on its hidden assumptions. Assumptions about how knowledge is produced and an implicit interest group perspective exclude the sociopolitical and class contexts that shape interest group power and citizen health. Overly rationalist assumptions about change understate the role of agency. The authors review the policy and practice implications of the Canadian population health model and point to alternative ways of viewing the determinants of health. PMID:12604479

  16. Analyses of School Commuting Data for Exposure Modeling Purposes

    EPA Science Inventory

    Human exposure models often make the simplifying assumption that school children attend school in the same Census tract where they live. This paper analyzes that assumption and provides information on the temporal and spatial distributions associated with school commuting. The d...

  17. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375

  18. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  19. Boosting multi-state models.

    PubMed

    Reulen, Holger; Kneib, Thomas

    2016-04-01

    One important goal in multi-state modelling is to explore information about conditional transition-type-specific hazard rate functions by estimating influencing effects of explanatory variables. This may be performed using single transition-type-specific models if these covariate effects are assumed to be different across transition-types. To investigate whether this assumption holds or whether one of the effects is equal across several transition-types (cross-transition-type effect), a combined model has to be applied, for instance with the use of a stratified partial likelihood formulation. Here, prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about ineffectivenesses of transition-type-specific or cross-transition-type effects. As a consequence, data-driven variable selection is an important task: a large number of estimable effects has to be taken into account if joint modelling of all transition-types is performed. A related but subsequent task is model choice: is an effect satisfactorily estimated assuming linearity, or does the true underlying relationship deviate strongly from linearity? This article introduces component-wise Functional Gradient Descent Boosting (boosting, for short) for multi-state models, an approach performing unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that features and advantages in the application of boosting introduced and illustrated in classical regression scenarios remain present in the transfer to multi-state models. As a consequence, boosting provides an effective means to answer questions about ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
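    The built-in variable selection of component-wise boosting can be illustrated with a simplified sketch. This uses a squared-error loss on synthetic data rather than the transition-specific partial-likelihood loss the article boosts, so it only demonstrates the selection mechanism, not the multi-state method itself.

```python
# Hedged sketch of component-wise gradient boosting: at each step, fit one simple base
# learner per covariate to the negative gradient and update only the best one, so
# uninformative covariates are never selected. Squared-error loss, synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)   # only x0, x3 matter

coef = np.zeros(p)
nu, n_iter = 0.1, 300                       # learning rate and number of boosting steps
offset = y.mean()
for _ in range(n_iter):
    resid = y - offset - X @ coef           # negative gradient of the squared-error loss
    betas = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)  # per-covariate slopes
    losses = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
    j = int(np.argmin(losses))              # update only the best-fitting component
    coef[j] += nu * betas[j]

print(np.round(coef, 2))                    # unselected covariates stay (near) zero
```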

  20. Investigation of hydrometeor classification uncertainties through the POLARRIS polarimetric radar simulator

    NASA Astrophysics Data System (ADS)

    Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.

    2017-12-01

    POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
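    A fuzzy-logic classification step of the kind described here can be sketched with one-dimensional membership beta functions. The centers, widths, weights, and the two-species, two-variable setup below are placeholders for illustration, not the CSU HID parameters derived from scattering simulations.

```python
# Hedged sketch of a fuzzy-logic hydrometeor classification step: each species gets a
# score built from membership beta functions of the radar variables; highest score wins.
import numpy as np

def mbf(x, center, width, shape=1.0):
    """Membership beta function, bounded in (0, 1]."""
    return 1.0 / (1.0 + ((x - center) / width) ** 2) ** shape

# Placeholder parameters for two species and two variables (Zh in dBZ, Zdr in dB).
SPECIES_PARAMS = {
    "rain": {"Zh": (40.0, 10.0), "Zdr": (1.5, 1.0)},
    "hail": {"Zh": (55.0, 10.0), "Zdr": (0.0, 1.0)},
}
WEIGHTS = {"Zh": 1.0, "Zdr": 1.0}

def classify(obs):
    scores = {}
    for species, params in SPECIES_PARAMS.items():
        weighted = sum(WEIGHTS[v] * mbf(obs[v], *params[v]) for v in params)
        scores[species] = weighted / sum(WEIGHTS.values())
    return max(scores, key=scores.get), scores

print(classify({"Zh": 52.0, "Zdr": 0.3}))
```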

  1. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    PubMed

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for the dynamics of FliA-dependent promoters.
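    The kind of kinetic correction described here, converting measured promoter activity into an estimate of protein concentration, can be sketched with a simple production/degradation-and-dilution balance. The rates and activity profile below are illustrative assumptions, not values estimated for the FliA-FlgM module.

```python
# Hedged sketch: estimate protein concentration from a reporter-derived promoter activity
# by integrating dp/dt = a(t) - (gamma + mu) * p. All numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 0.05   # protein degradation rate (1/min), assumed
mu = 0.01      # growth (dilution) rate (1/min), assumed

def promoter_activity(t):
    """Illustrative activity profile, standing in for reporter-derived measurements."""
    return 1.0 + 0.5 * np.sin(2 * np.pi * t / 200.0)

def dpdt(t, p):
    return promoter_activity(t) - (gamma + mu) * p

sol = solve_ivp(dpdt, (0.0, 600.0), y0=[0.0], dense_output=True)
t = np.linspace(0.0, 600.0, 7)
print(np.round(sol.sol(t)[0], 2))   # estimated protein concentration over time
```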

  2. Cumulative Risk and Impact Modeling on Environmental Chemical and Social Stressors.

    PubMed

    Huang, Hongtai; Wang, Aolin; Morello-Frosch, Rachel; Lam, Juleen; Sirota, Marina; Padula, Amy; Woodruff, Tracey J

    2018-03-01

    The goal of this review is to identify cumulative modeling methods used to evaluate combined effects of exposures to environmental chemicals and social stressors. The specific review question is: What are the existing quantitative methods used to examine the cumulative impacts of exposures to environmental chemical and social stressors on health? There has been an increase in literature that evaluates combined effects of exposures to environmental chemicals and social stressors on health using regression models; very few studies applied other data mining and machine learning techniques to this problem. The majority of studies we identified used regression models to evaluate combined effects of multiple environmental and social stressors. With proper study design and appropriate modeling assumptions, additional data mining methods may be useful to examine combined effects of environmental and social stressors.

  3. Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2014-01-01

    Coarse-grained (CG) modeling is a well-acknowledged simulation approach for getting insight into long-time scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific (requiring user-defined assumptions about the folding scenario) to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for the simulations of long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems; the prediction results agreed well with the experimental results.

  4. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  5. THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS

    EPA Science Inventory

    Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...

  6. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
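    In practice, checks like those compared in this simulation study are available in standard survival packages. The sketch below uses the lifelines package's scaled-Schoenfeld-style test on its bundled example dataset; the API is assumed as documented and the example is not one of the paper's five test statistics per se.

```python
# Hedged sketch of a practical proportional-hazards check, assuming the lifelines API:
# fit a Cox model, then test each covariate for non-proportionality over time.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()                          # example survival dataset shipped with lifelines
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# Small p-values flag covariates whose effects appear to change with time.
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()
```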

  7. The effect of solution nonideality on modeling transmembrane water transport and diffusion-limited intracellular ice formation during cryopreservation

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Takamatsu, Hiroshi; He, Xiaoming

    2014-04-01

    A new model was developed to predict transmembrane water transport and diffusion-limited ice formation in cells during freezing without the ideal-solution assumption that has been used in previous models. The model was applied to predict cell dehydration and intracellular ice formation (IIF) during cryopreservation of mouse oocytes and bovine carotid artery endothelial cells in aqueous sodium chloride (NaCl) solution with glycerol as the cryoprotectant or cryoprotective agent. A comparison of the predictions between the present model and the previously reported models indicated that the ideal-solution assumption results in under-prediction of the amount of intracellular ice at slow cooling rates (<50 K/min). In addition, the lower critical cooling rates for IIF that is lethal to cells predicted by the present model were much lower than those estimated with the ideal-solution assumption. This study represents the first investigation on how accounting for solution nonideality in modeling water transport across the cell membrane could affect the prediction of diffusion-limited ice formation in biological cells during freezing. Future studies are warranted to look at other assumptions alongside nonideality to further develop the model as a useful tool for optimizing the protocol of cell cryopreservation for practical applications.
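    The role the ideal-solution assumption plays in water-transport modeling can be illustrated with a stripped-down osmotic flux balance. The functional form of the non-ideal correction and all parameter values below are illustrative assumptions, not the model or the values fitted for oocytes or endothelial cells in this work.

```python
# Hedged sketch: cell water volume changes with the transmembrane osmolality difference;
# non-ideality enters through a concentration-dependent osmotic coefficient (assumed form).
import numpy as np
from scipy.integrate import solve_ivp

Lp, A = 0.1, 1.0    # hydraulic conductivity and membrane area (arbitrary units, assumed)
n_s = 0.3           # intracellular osmoles of impermeant solute (assumed)
m_ext = 2.0         # extracellular osmolality after freezing concentrates the solution

def osmotic_coeff(m, ideal=True, b=0.1):
    return 1.0 if ideal else 1.0 + b * m   # simple illustrative non-ideal correction

def dVdt(t, V, ideal):
    m_int = n_s / V[0]
    drive = osmotic_coeff(m_ext, ideal) * m_ext - osmotic_coeff(m_int, ideal) * m_int
    return [-Lp * A * drive]               # water leaves the cell when outside is stronger

for ideal in (True, False):
    sol = solve_ivp(dVdt, (0.0, 10.0), y0=[1.0], args=(ideal,))
    print("ideal" if ideal else "non-ideal", round(sol.y[0, -1], 3))
```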

  8. The effect of solution nonideality on modeling transmembrane water transport and diffusion-limited intracellular ice formation during cryopreservation.

    PubMed

    Zhao, Gang; Takamatsu, Hiroshi; He, Xiaoming

    2014-04-14

    A new model was developed to predict transmembrane water transport and diffusion-limited ice formation in cells during freezing without the ideal-solution assumption that has been used in previous models. The model was applied to predict cell dehydration and intracellular ice formation (IIF) during cryopreservation of mouse oocytes and bovine carotid artery endothelial cells in aqueous sodium chloride (NaCl) solution with glycerol as the cryoprotectant or cryoprotective agent. A comparison of the predictions between the present model and the previously reported models indicated that the ideal-solution assumption results in under-prediction of the amount of intracellular ice at slow cooling rates (<50 K/min). In addition, the lower critical cooling rates for IIF that is lethal to cells predicted by the present model were much lower than those estimated with the ideal-solution assumption. This study represents the first investigation on how accounting for solution nonideality in modeling water transport across the cell membrane could affect the prediction of diffusion-limited ice formation in biological cells during freezing. Future studies are warranted to look at other assumptions alongside nonideality to further develop the model as a useful tool for optimizing the protocol of cell cryopreservation for practical applications.

  9. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  10. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    NASA Astrophysics Data System (ADS)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  11. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
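    For reference, the standard linear, Gaussian Kalman filter whose assumptions the neural-network approach relaxes can be written in a few lines. The scalar random-walk process below is a toy illustration, not one of the plant models discussed in the record.

```python
# Hedged sketch of a scalar Kalman filter on a toy random-walk process with Gaussian,
# zero-mean process and measurement noise (exactly the assumptions the article relaxes).
import numpy as np

rng = np.random.default_rng(3)
n = 100
q, r = 0.01, 0.5                       # process and measurement noise variances (assumed)
x_true = np.cumsum(rng.normal(scale=np.sqrt(q), size=n))
z = x_true + rng.normal(scale=np.sqrt(r), size=n)

x_hat, p = 0.0, 1.0
estimates = []
for zk in z:
    p = p + q                          # predict (random-walk state model)
    k = p / (p + r)                    # Kalman gain
    x_hat = x_hat + k * (zk - x_hat)   # update with the new measurement
    p = (1.0 - k) * p
    estimates.append(x_hat)

print(round(float(np.mean((np.array(estimates) - x_true) ** 2)), 4))
```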

  12. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    USGS Publications Warehouse

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.

  13. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

    The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence × window-period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
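    The contrast between the classical estimate and a relaxed one can be sketched as follows. The probabilities and the simple two-term relaxation are illustrative assumptions, not the paper's probability model or its calibrated inputs.

```python
# Hedged sketch: classical incidence x window-period residual-risk estimate versus a
# simple relaxation in which the screening test can miss detectable infections and
# occasionally detect window-period infections. Illustrative numbers only.
def rr_classical(incidence_per_py, window_days):
    """Classical estimate: probability a donation falls within the window period."""
    return incidence_per_py * window_days / 365.0

def rr_relaxed(p_window, p_infected_detectable, sens_inside=0.05, sens_outside=0.995):
    """Window-period misses plus test failures on infections outside the window."""
    return p_window * (1.0 - sens_inside) + p_infected_detectable * (1.0 - sens_outside)

inc, window = 5e-5, 9.0                      # illustrative incidence (per person-year), window (days)
p_window = rr_classical(inc, window)
print(p_window, rr_relaxed(p_window, p_infected_detectable=1e-5))
```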

  14. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  15. The contributions of interpersonal trauma exposure and world assumptions to predicting dissociation in undergraduates.

    PubMed

    Lilly, Michelle M

    2011-01-01

    This study examines the relationship between world assumptions and trauma history in predicting symptoms of dissociation. It was proposed that cognitions related to the safety and benevolence of the world, as well as self-worth, would be related to the presence of dissociative symptoms, the latter of which were theorized to defend against threats to one's sense of safety, meaningfulness, and self-worth. Undergraduates from a midwestern university completed the Multiscale Dissociation Inventory, World Assumptions Scale, and Traumatic Life Events Questionnaire. Consistent with the hypotheses, world assumptions were related to the extent of trauma exposure and interpersonal trauma exposure in the sample but were not significantly related to non-interpersonal trauma exposure. World assumptions acted as a significant partial mediator of the relationship between trauma exposure and dissociation, and this relationship held when interpersonal trauma exposure specifically was considered. The factor structures of dissociation and world assumptions were also examined using principal component analysis, with the benevolence and self-worth factors of the World Assumptions Scale showing the strongest relationships with trauma exposure and dissociation. Clinical implications are discussed.

  16. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
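    The two baseline additivity models compared here, concentration addition (CA) and independent action (IA), can be illustrated on a toy two-component mixture with simple saturating dose-response curves. The doses and EC50 values are illustrative assumptions, not the gene- or pathway-level benchmark values derived from the transcriptomics data.

```python
# Hedged sketch of CA vs. IA for a two-component mixture with hyperbolic dose-response
# curves E(d) = top * d / (ec50 + d). Illustrative parameters only.
import numpy as np
from scipy.optimize import brentq

def response(dose, top, ec50):
    return top * dose / (ec50 + dose)

components = [  # (dose in mixture, max effect, EC50), illustrative values
    (2.0, 1.0, 5.0),
    (1.0, 1.0, 3.0),
]

# Independent action: component effects combine like independent probabilities.
ia = 1.0 - np.prod([1.0 - response(d, top, ec50) for d, top, ec50 in components])

# Concentration addition: find the effect level E at which the toxic units sum to one,
# using each component's inverse dose-response ECx = ec50 * E / (top - E).
def toxic_unit_sum(E):
    return sum(d / (ec50 * E / (top - E)) for d, top, ec50 in components) - 1.0

ca = brentq(toxic_unit_sum, 1e-9, 1.0 - 1e-9)
print(round(ia, 3), round(ca, 3))
```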

  17. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks

    PubMed Central

    2017-01-01

    Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees. PMID:28545083

  18. Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks.

    PubMed

    Klinkenberg, Don; Backer, Jantien A; Didelot, Xavier; Colijn, Caroline; Wallinga, Jacco

    2017-05-01

    Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees.

  19. Rationality as the Basic Assumption in Explaining Japanese (or Any Other) Business Culture.

    ERIC Educational Resources Information Center

    Koike, Shohei

    Economic analysis, with its explicit assumption that people are rational, is applied to the Japanese and American business cultures to illustrate how the approach is useful for understanding cultural differences. Specifically, differences in cooperative behavior among Japanese and American workers are examined. Economic analysis goes beyond simple…

  20. Formalization and Analysis of Reasoning by Assumption

    ERIC Educational Resources Information Center

    Bosse, Tibor; Jonker, Catholijn M.; Treur, Jan

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically…

  1. "What You See Is [Not Always] What You Get!" Dispelling Race and Gender Leadership Assumptions

    ERIC Educational Resources Information Center

    Reed, Latish; Evans, Andrea E.

    2008-01-01

    Race and gender affect the way in which African-American female principals perceive and enact their roles in predominantly African-American urban schools. Using empirical data drawn from a larger qualitative study, this article examines and challenges racial and gendered assumptions about African-American leadership, and specifically American…

  2. Biological control agents elevate hantavirus by subsidizing deer mouse populations

    Treesearch

    Dean E. Pearson; Ragan M. Callaway

    2006-01-01

    Biological control of exotic invasive plants using exotic insects is practiced under the assumption that biological control agents are safe if they do not directly attack non-target species. We tested this assumption by evaluating the potential for two host-specific biological control agents (Urophora spp.), widely established in North America for spotted...

  3. The Relationship between Checklist Scores on a Communication OSCE and Analogue Patients' Perceptions of Communication

    ERIC Educational Resources Information Center

    Mazor, Kathleen M.; Ockene, Judith K.; Rogers, H. Jane; Carlin, Michele M.; Quirk, Mark E.

    2005-01-01

    Many efforts to teach and evaluate physician-patient communication are based on two assumptions: first, that communication can be conceptualized as consisting of specific observable behaviors, and second, that physicians who exhibit certain behaviors are more effective in communicating with patients. These assumptions are usually implicit, and are…

  4. When approved is not enough: development of a supervision consultation model.

    PubMed

    Green, S; Shilts, L; Bacigalupe, G

    2001-10-01

    The dramatic increase in the literature that addresses family therapy training and supervision over the last decade has been predominantly in the area of theory, rather than practice. This article describes the development of a meta-supervisory learning context for approved supervisors and provides examples of interactions between supervisors that subsequently influenced both therapy and supervision. We delineate the assumptions that inform our work and offer specific guidelines for supervisors who wish to implement a similar model in their own contexts. We provide suggestions for a proactive refiguring of supervision that may have profound effects and benefits for supervisors and supervisees alike.

  5. Population modeling and its role in toxicological studies

    USGS Publications Warehouse

    Sauer, John R.; Pendleton, Grey W.; Hoffman, David J.; Rattner, Barnett A.; Burton, G. Allen; Cairns, John

    1995-01-01

    A model could be defined as any abstraction from reality that is used to provide some insight into the real system. In this discussion, we will use a more specific definition: a model is a set of rules or assumptions, expressed as mathematical equations, that describe how animals survive and reproduce, including the external factors that affect these characteristics. A model simplifies a system, retaining essential components while eliminating parts that are not of interest. Ecology has a rich history of using models to gain insight into populations, often borrowing both model structures and analysis methods from demographers and engineers. Much of the development of these models has been a consequence of mathematicians and physicists seeing simple analogies between their models and patterns in natural systems. Consequently, one major application of ecological modeling has been to emphasize the analysis of the dynamics of often complex models to provide insight into theoretical aspects of ecology.

  6. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to model airflow and particulate transport. This modeling is then compared with field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.

  7. Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system

    USGS Publications Warehouse

    Stewart, M.; Langevin, C.

    1999-01-01

    A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 x 10^5 m^3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75 day long simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.

  8. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
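    A mean-field (homogeneous-mixing) model of this general kind, with susceptible, infectious, and empty-space compartments, can be written as a small ODE system. The equations and rate constants below are illustrative assumptions in the spirit of the model described, not the authors' exact formulation.

```python
# Hedged sketch of an SIS-type mean-field model with an explicit empty-space compartment;
# recovered individuals return to the susceptible class (no immunity). Rates are assumed.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.6, 0.2     # transmission and recovery rates (assumed)
b, d = 0.05, 0.02          # birth into empty space and death rates (assumed)

def rhs(t, y):
    s, i, e = y            # fractions of susceptible, infectious, and empty sites
    new_inf = beta * s * i # homogeneous mixing: contacts proportional to s * i
    ds = b * e - new_inf + gamma * i - d * s
    di = new_inf - gamma * i - d * i
    de = -b * e + d * (s + i)
    return [ds, di, de]

sol = solve_ivp(rhs, (0.0, 400.0), y0=[0.98, 0.01, 0.01])
print(np.round(sol.y[:, -1], 3))   # long-run fractions (their sum stays at 1)
```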

  9. Fun with maths: exploring implications of mathematical models for malaria eradication.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A

    2014-12-11

    Mathematical analyses and modelling have an important role informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.

  10. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.

  11. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    PubMed

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, like many other Sub-Saharan African countries, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice for analysing such data, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly popular in analysing survival data, particularly large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used to study factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. The first part of the analysis is based on the classical Cox PH model and the second part is based on random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates originally excluded from the analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, the wealth index, the total number of children ever born, and the child's birth order. The results further indicated that the predictive performance of random survival forests built using all covariates, including those that violate the PH assumption, was higher than that of random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
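    Fitting a random survival forest requires no proportional-hazards assumption, which is the point of the comparison above. The sketch below fits one on synthetic data using the scikit-survival package; the API is assumed as documented, and the data are not the Demographic and Health Survey data analysed in the article.

```python
# Hedged sketch: fit a random survival forest on synthetic right-censored data and report
# the training concordance index, assuming the scikit-survival API as documented.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 3))
hazard = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1])      # synthetic covariate effects
time = rng.exponential(1.0 / hazard)
event = time < 5.0                                   # administrative censoring at t = 5
time = np.minimum(time, 5.0)

y = Surv.from_arrays(event=event, time=time)
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
print(round(rsf.score(X, y), 3))                     # concordance index on training data
```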

  12. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
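    The bookkeeping behind the reported reductions is simple variance arithmetic: repeatable components are removed (in variance) from the total. The component values in the sketch below are illustrative assumptions, not the estimates obtained from the Taiwan dataset.

```python
# Hedged sketch of how removing the ergodic assumption shrinks aleatory variability:
# subtract repeatable variance components from the total. Illustrative values only.
import numpy as np

sigma_total = 0.70        # total standard deviation of ln(ground motion), assumed
tau_source_loc = 0.15     # repeatable source-location component, assumed
phi_site = 0.25           # repeatable site component, assumed
phi_path = 0.40           # repeatable path component, assumed

sigma_single_site = np.sqrt(sigma_total**2 - phi_site**2)
sigma_single_path = np.sqrt(sigma_total**2 - phi_site**2 - phi_path**2 - tau_source_loc**2)

for name, s in [("single-site", sigma_single_site), ("single-path", sigma_single_path)]:
    print(f"{name}: {s:.3f} ({100 * (1 - s / sigma_total):.0f}% smaller than total)")
```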

  13. Measuring and modeling C flux rates through the central metabolic pathways in microbial communities using position-specific 13C-labeled tracers

    NASA Astrophysics Data System (ADS)

    Dijkstra, P.; van Groenigen, K.; Hagerty, S.; Salpas, E.; Fairbanks, D. E.; Hungate, B. A.; KOCH, G. W.; Schwartz, E.

    2012-12-01

    The production of energy and metabolic precursors occurs in well-known processes such as glycolysis and Krebs cycle. We use position-specific 13C-labeled metabolic tracers, combined with models of microbial metabolic organization, to analyze the response of microbial community energy production, biosynthesis, and C use efficiency (CUE) in soils, decomposing litter, and aquatic communities. The method consists of adding position-specific 13C-labeled metabolic tracers to parallel soil incubations, in this case 1-13C and 2,3-13C pyruvate and 1-13C and U-13C glucose. The measurement of CO2 released from the labeled tracers is used to calculate the C flux rates through the various metabolic pathways. A simplified metabolic model consisting of 23 reactions is solved using results of the metabolic tracer experiments and assumptions of microbial precursor demand. This new method enables direct estimation of fundamental aspects of microbial energy production, CUE, and soil organic matter formation in relatively undisturbed microbial communities. We will present results showing the range of metabolic patterns observed in these communities and discuss results from testing metabolic models.

  14. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential thresholds where shifts in ecosystem function may be expected. Finally, we discuss the mechanistic insight that can be applied in process-based models and datasets that can evaluate models across spatial and temporal scales.

  15. [Life cycle strategies: a synthesis of empirical and theoretical approaches].

    PubMed

    Romanovskiĭ, Iu E

    1998-01-01

    A scheme of relationships among life-history characters is developed on assumptions of determinate growth and dependence of juvenile mortality on the specific growth rate. It is shown that constraints on the relative neonate size (W0/W∞) and the minimum value of the biotic potential (rmax) lead to a "triangular" shape of the life-history set on the plane defined by juvenile and adult mortality. This completely coincides with the Ramenskiĭ-Grime (C-S-R) classification of life-history strategies. Phylogenetic constraints can reduce this set to a relatively narrow r/K-continuum specifically oriented for a certain taxon. Similar restrictions generate models of life-history optimization which predict interspecific allometries between life-history traits.

  16. Sensitivity experiments of a regional climate model to the different convective schemes over Central Africa

    NASA Astrophysics Data System (ADS)

    Armand J, K. M.

    2017-12-01

    In this study, version 4 of the regional climate model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC), and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind, and aerosol optical depth. Emphasis in the model results was placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions were identified for more specific analysis: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that regardless of period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that using BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.

  17. LS³: A Method for Improving Phylogenomic Inferences When Evolutionary Rates Are Heterogeneous among Taxa.

    PubMed

    Rivera-Rivera, Carlos J; Montoya-Burgos, Juan I

    2016-06-01

    Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation provides the user with the possibility to remove the flagged sequences for generating a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing data flagged by LS³. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
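
    The selection loop at the heart of LS³ can be sketched in a few lines of Python. The helper callables below (fit_homogeneous, fit_heterogeneous, lineage_rate) are hypothetical stand-ins for calls to a phylogenetic likelihood engine, and the chi-square degrees of freedom are passed in rather than derived; this illustrates the iterative test-and-remove logic, not the published implementation.

        from scipy.stats import chi2

        def ls3_filter(alignment, ingroup_taxa, fit_homogeneous, fit_heterogeneous,
                       lineage_rate, df_diff, alpha=0.05):
            """Sequentially drop the fastest-evolving ingroup taxon until a likelihood
            ratio test no longer rejects lineage rate homogeneity."""
            kept = list(ingroup_taxa)
            flagged = []
            while len(kept) > 2:
                lnl_homo = fit_homogeneous(alignment, kept)      # single rate for all lineages
                lnl_hetero = fit_heterogeneous(alignment, kept)  # lineage-specific rates
                lrt = 2.0 * (lnl_hetero - lnl_homo)
                if chi2.sf(lrt, df_diff) > alpha:                # homogeneity not rejected: done
                    break
                fastest = max(kept, key=lambda taxon: lineage_rate(alignment, kept, taxon))
                kept.remove(fastest)                             # flag and drop the fastest lineage
                flagged.append(fastest)
            return kept, flagged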

  18. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    DOE PAGES

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Finally, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
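
    A hedged sketch of the general workflow, not the authors' code: local flow features are labeled as high or low uncertainty wherever a high-fidelity benchmark shows a given eddy-viscosity assumption breaking down, a classifier is trained, and validation folds hold out entire flow configurations to test generalization. The features, labels, and group IDs below are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GroupKFold

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 12))                      # local flow features at RANS points
        y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)      # placeholder "assumption violated" labels
        flow_id = rng.integers(0, 5, size=5000)              # which canonical flow each point belongs to

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, flow_id):
            clf.fit(X[train_idx], y[train_idx])              # train on some flows...
            print("held-out flow accuracy: %.2f" % clf.score(X[test_idx], y[test_idx]))  # ...test on unseen ones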

  19. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model-Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  20. Validation of abundance estimates from mark-recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Treesearch

    Amanda E. Rosenberger; Jason B. Dunham

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Peterson mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams....

  1. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844

  2. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    NASA Astrophysics Data System (ADS)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most previous studies have considered climate models and scenarios as the major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to the overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters under changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in the UGB under the nonstationary model condition is found to decrease in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and the GCMs, along with their interactions with emission scenarios, act as the dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them for future streamflow projections and to segregate the contribution of various sources to the uncertainty.
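
    The ANOVA-based segregation can be illustrated with a toy factorial of projections; the data frame below is fabricated and the factor set is reduced to three sources (GCM, emission scenario, land use), but the mechanics of attributing sums of squares, with the residual standing in for internal variability, are the same.

        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        runs = pd.DataFrame({
            "gcm":      ["G1", "G1", "G2", "G2"] * 2,
            "scenario": ["RCP4.5", "RCP8.5"] * 4,
            "landuse":  ["LU_A"] * 4 + ["LU_B"] * 4,
            "flow":     [310, 280, 295, 260, 305, 275, 300, 255],   # projected mean streamflow (toy values)
        })

        model = ols("flow ~ C(gcm) * C(scenario) + C(landuse)", data=runs).fit()
        anova = sm.stats.anova_lm(model, typ=2)
        shares = anova["sum_sq"] / anova["sum_sq"].sum()   # fractional contribution of each source
        print(shares.round(2))                             # the Residual row plays the role of internal variability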

  3. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    PubMed

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for a heightened awareness of and increased transparency in the reporting of statistical assumption checking.
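
    The specific misconception named above, checking normality of the variables instead of the errors, is easy to make concrete. A minimal Python sketch on synthetic data: the predictor is deliberately skewed, which is perfectly acceptable, and the diagnostic belongs on the fitted residuals.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.exponential(size=200)                 # skewed predictor: not a problem in itself
        y = 2.0 + 3.0 * x + rng.normal(size=200)      # the errors are normal

        fit = sm.OLS(y, sm.add_constant(x)).fit()
        print(stats.shapiro(x))                       # "significant", but this is the wrong check
        print(stats.shapiro(fit.resid))               # the normality assumption concerns the residuals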

  4. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    PubMed Central

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for a heightened awareness of and increased transparency in the reporting of statistical assumption checking. PMID:28533971

  5. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  6. A Conditional Joint Modeling Approach for Locally Dependent Item Responses and Response Times

    ERIC Educational Resources Information Center

    Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua

    2015-01-01

    The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…

  7. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    PubMed Central

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
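
    A toy simulation (illustrative only; all coefficients are made up) of the two violations discussed above: the mediator is measured with error and a common cause of the mediator and the outcome is omitted from the analysis, so the product-of-coefficients estimate no longer recovers the true mediated effect.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 20000
        x = rng.normal(size=n)                          # randomized independent variable
        u = rng.normal(size=n)                          # unmeasured confounder of the M -> Y relation
        m = 0.5 * x + 0.6 * u + rng.normal(size=n)      # true mediator (a = 0.5)
        y = 0.4 * m + 0.6 * u + rng.normal(size=n)      # outcome (b = 0.4)
        m_observed = m + rng.normal(size=n)             # unreliable measure of the mediator

        def product_of_coefficients(mediator):
            a_hat = sm.OLS(mediator, sm.add_constant(x)).fit().params[1]
            b_hat = sm.OLS(y, sm.add_constant(np.column_stack([mediator, x]))).fit().params[1]
            return a_hat * b_hat

        print("true mediated effect a*b:", 0.5 * 0.4)
        print("estimate with both violations:", round(product_of_coefficients(m_observed), 3))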

  8. Comparison of 2D Finite Element Modeling Assumptions with Results From 3D Analysis for Composite Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.

    2004-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.

  9. Influence of 2D Finite Element Modeling Assumptions on Debonding Prediction for Composite Skin-stiffener Specimens Subjected to Tension and Bending

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.

  10. Willingness-to-pay for steelhead trout fishing: Implications of two-step consumer decisions with short-run endowments

    NASA Astrophysics Data System (ADS)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2010-09-01

    Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.

  11. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    PubMed

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
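
    The estimation idea can be sketched as follows; this is a stand-in using scikit-learn's stacking ensemble rather than the SuperLearner implementation used in the study, and the simulated covariates, treatment mechanism, and clipping threshold are assumptions made purely for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 2000
        covariates = rng.normal(size=(n, 4))             # e.g. time-dependent A1c and other confounders
        p_true = 1.0 / (1.0 + np.exp(-(covariates[:, 0] + np.sin(covariates[:, 1]))))
        treated = rng.binomial(1, p_true)                # exposure follows a nonlinear mechanism

        stack = StackingClassifier(
            estimators=[("logit", LogisticRegression(max_iter=1000)),
                        ("forest", RandomForestClassifier(n_estimators=200, random_state=0))],
            final_estimator=LogisticRegression(max_iter=1000))
        stack.fit(covariates, treated)                   # data-adaptive propensity model

        p_hat = np.clip(stack.predict_proba(covariates)[:, 1], 0.01, 0.99)
        ip_weights = treated / p_hat + (1 - treated) / (1 - p_hat)   # weights for the MSM fit (stabilization omitted)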

  12. Two's company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data.

    PubMed

    Giusti, Chad; Ghrist, Robert; Bassett, Danielle S

    2016-08-01

    The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.
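
    One common construction for turning pairwise data into a simplicial complex is the clique (flag) complex, in which every set of k mutually connected nodes is treated as a (k-1)-simplex. The short Python illustration below uses synthetic "activity" data and an arbitrary correlation threshold; it is only meant to show how higher-order units arise from a graph.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(5)
        activity = rng.normal(size=(8, 500))             # 8 regions x 500 time points (synthetic)
        corr = np.corrcoef(activity)

        G = nx.Graph()
        G.add_nodes_from(range(8))
        G.add_edges_from((i, j) for i in range(8) for j in range(i + 1, 8) if corr[i, j] > 0.0)

        simplex_counts = {}
        for clique in nx.enumerate_all_cliques(G):       # a clique of size k becomes a (k-1)-simplex
            dim = len(clique) - 1
            simplex_counts[dim] = simplex_counts.get(dim, 0) + 1
        print(simplex_counts)                            # counts of simplices by dimension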

  13. A fuzzy logic expert system for evaluating policy progress towards sustainability goals.

    PubMed

    Cisneros-Montemayor, Andrés M; Singh, Gerald G; Cheung, William W L

    2017-12-16

    Evaluating progress towards environmental sustainability goals can be difficult due to a lack of measurable benchmarks and insufficient or uncertain data. Marine settings are particularly challenging, as stakeholders and objectives tend to be less well defined and ecosystem components have high natural variability and are difficult to observe directly. Fuzzy logic expert systems are useful analytical frameworks to evaluate such systems, and we develop such a model here to formally evaluate progress towards sustainability targets based on diverse sets of indicators. Evaluation criteria include recent (since policy enactment) and historical (from earliest known state) change, type of indicators (state, benefit, pressure, response), time span and spatial scope, and the suitability of an indicator in reflecting progress toward a specific objective. A key aspect of the framework is that all assumptions are transparent and modifiable to fit different social and ecological contexts. We test the method by evaluating progress towards four Aichi Biodiversity Targets in Canadian oceans, including quantitative progress scores, information gaps, and the sensitivity of results to model and data assumptions. For Canadian marine systems, national protection plans and biodiversity awareness show good progress, but species and ecosystem states overall do not show strong improvement. Well-defined goals are vital for successful policy implementation, as ambiguity allows for conflicting potential indicators, which in natural systems increases uncertainty in progress evaluations. Importantly, our framework can be easily adapted to assess progress towards policy goals with different themes, globally or in specific regions.

  14. Preliminary Thermal Modeling of HI-STORM 100 Storage Modules at Diablo Canyon Power Plant ISFSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuta, Judith M.; Adkins, Harold E.

    Thermal analysis is being undertaken at Pacific Northwest National Laboratory (PNNL) in support of inspections of selected storage modules at various locations around the United States, as part of the Used Fuel Disposition Campaign of the U.S. Department of Energy, Office of Nuclear Energy (DOE-NE) Fuel Cycle Research and Development. This report documents pre-inspection predictions of temperatures for two modules at the Diablo Canyon Power Plant ISFSI identified as candidates for inspection. These are HI-STORM 100 modules of a site-specific design for storing PWR 17x17 fuel in MPC-32 canisters. The temperature predictions reported in this document were obtained with detailed COBRA-SFS models of these storage systems, with the following boundary conditions and assumptions.
    • Storage module overpack configuration based on FSAR documentation of HI-STORM 100S-218, Version B, due to unavailability of site-specific design data for Diablo Canyon ISFSI modules.
    • Individual assembly and total decay heat loadings for each canister, based on at-loading values provided by PG&E, "aged" to time of inspection using ORIGEN modeling.
      o Special Note: there is an inherent conservatism of unquantified magnitude (informally estimated as up to approximately 20%) in the utility-supplied values for at-loading assembly decay heat values.
    • Axial decay heat distributions based on a bounding generic profile for PWR fuel.
    • Axial location of beginning of fuel assumed same as WE 17x17 OFA fuel, due to unavailability of specific data for WE 17x17 STD and WE 17x17 Vantage 5 fuel designs.
    • Ambient conditions of still air at 50°F (10°C) assumed for base-case evaluations.
      o Wind conditions at the Diablo Canyon site are unquantified, due to unavailability of site meteorological data.
      o Additional still-air evaluations performed at 70°F (21°C), 60°F (16°C), and 40°F (4°C), to cover a range of possible conditions at the time of the inspection. (Calculations were also performed at 80°F (27°C), for comparison with design basis assumptions.)
    All calculations are for steady-state conditions, on the assumption that the surfaces of the module that are accessible for temperature measurements during the inspection will tend to follow ambient temperature changes relatively closely. Comparisons to the results of the inspections, and post-inspection evaluations of temperature measurements obtained in the specific modules, will be documented in a separate follow-on report, to be issued in a timely manner after the inspection has been performed.

  15. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
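
    The logic is easy to show with a bare-bones rejection ABC sketch in Python (the study itself uses an ABC-PMC variant, and the forward model, summary statistics, prior and tolerance below are all placeholders): draw parameters from the prior, simulate, and keep draws whose simulated summaries land within a tolerance of the observed ones.

        import numpy as np

        rng = np.random.default_rng(4)

        def forward_model(theta):
            """Placeholder for populating haloes with an HOD and measuring summaries
            (number density, correlation function, group multiplicity)."""
            return theta + rng.normal(scale=0.1, size=theta.shape)

        def distance(simulated, observed):
            return np.linalg.norm(simulated - observed)

        observed_summary = np.array([1.0, -0.5])
        prior_draws = rng.uniform(-3.0, 3.0, size=(50_000, 2))       # flat prior on two "HOD" parameters

        accepted = np.array([distance(forward_model(theta), observed_summary) < 0.2
                             for theta in prior_draws])
        posterior_samples = prior_draws[accepted]                    # approximate posterior draws
        print(len(posterior_samples), posterior_samples.mean(axis=0))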

  16. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    NASA Astrophysics Data System (ADS)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). The estimation of unplanned domains (sub-districts and villages) is limited in its ability to yield reliable direct estimates. One possible solution to overcome this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. As an alternative, the M-quantile regression approach to small area estimation is based on modelling area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It yields outlier-robust estimation through an influence function of M-estimator type and does not need strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor District. We also compare the results with direct estimates. The results show that the framework may be preferable when the direct estimate indicates no incidence of poverty at all in a small area.

  17. Analyzing recurrent events when the history of previous episodes is unknown or not taken into account: proceed with caution.

    PubMed

    Navarro, Albert; Casanovas, Georgina; Alvarado, Sergio; Moriña, David

    Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well common-baseline hazard models perform in estimating the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in the presence of event dependence and when the history of prior episodes is unknown or is not taken into account. Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, mean length of confidence intervals, and compliance with the assumption of proportional hazards. Results indicate that the bias worsens as event dependence increases, leading to a considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, number of effects to be estimated, and sample size. Common-baseline hazard models cannot be recommended when we analyse recurrent events in the presence of event dependence. It is important to have access to each subject's history of prior episodes, since this permits better estimation of the exposure effects. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  18. On the Connection Between One-and Two-Equation Models of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.

  19. Ames interactive molecular model building system - A 3-D computer modelling system applied to the study of the origin of life

    NASA Technical Reports Server (NTRS)

    Coeckelenbergh, Y.; Macelroy, R. D.; Rein, R.

    1978-01-01

    The investigation of specific interactions among biological molecules must take into consideration the stereochemistry of the structures. Thus, models of the molecules are essential for describing the spatial organization of potentially interacting groups, and estimations of conformation are required for a description of spatial organization. Both the function of visualizing molecules, and that of estimating conformation through calculations of energy, are part of the molecular modeling system described in the present paper. The potential uses of the system in investigating some aspects of the origin of life rest on the assumption that translation of conformation from genetic elements to catalytic elements would have been required for the development of the first replicating systems subject to the process of biological evolution.

  20. Higher impact of female than male migration on population structure in large mammals.

    PubMed

    Tiedemann, R; Hardy, O; Vekemans, X; Milinkovitch, M C

    2000-08-01

    We simulated large mammal populations using an individual-based stochastic model under various sex-specific migration schemes and life history parameters from the blue whale and the Asian elephant. Our model predicts that genetic structure at nuclear loci is significantly more influenced by female than by male migration. We identified requisite comigration of mother and offspring during gravidity and lactation as the primary cause of this phenomenon. In addition, our model predicts that the common assumption that geographical patterns of mitochondrial DNA (mtDNA) could be translated into female migration rates (Nmf) will cause biased estimates of maternal gene flow when extensive male migration occurs and male mtDNA haplotypes are included in the analysis.

  1. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to a glioma tumor data set and a breast cancer gene expression survival data set are shown to illustrate the new methodology in real data analysis.

  2. Impact of baryonic physics on intrinsic alignments

    DOE PAGES

    Tenneti, Ananth; Gnedin, Nickolay Y.; Feng, Yu

    2017-01-11

    We explore the effects of specific assumptions in the subgrid models of star formation and stellar and AGN feedback on the intrinsic alignments of galaxies in cosmological simulations of the "MassiveBlack-II" family. Using smaller-volume simulations, we explored the parameter space of the subgrid star formation and feedback model and found remarkable robustness of the observable statistical measures to the details of the subgrid physics. The one observational probe most sensitive to modeling details is the distribution of misalignment angles. We hypothesize that the amount of angular momentum carried away by the galactic wind is the primary physical quantity that controls the orientation of the stellar distribution. Finally, our results are also consistent with a similar study by the EAGLE simulation team.

  3. Foundations for estimation by the method of least squares

    NASA Technical Reports Server (NTRS)

    Hauck, W. W., Jr.

    1971-01-01

    Least squares estimation is discussed from the point of view of a statistician. Much of the emphasis is on problems encountered in application and, more specifically, on questions involving assumptions: what assumptions are needed, when are they needed, what happens if they are not valid, and if they are invalid, how that fact can be detected.

  4. Unintended Consequences or Testing the Integrity of Teachers and Students.

    ERIC Educational Resources Information Center

    Kimmel, Ernest W.

    Large-scale testing programs are generally based on the assumptions that the test-takers experience standard conditions for taking the test and that everyone will do his or her own work without having prior knowledge of specific questions. These assumptions are not necessarily true. The ways students and educators use to get around standardizing…

  5. A comprehensive literature review of haplotyping software and methods for use with unrelated individuals.

    PubMed

    Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J

    2005-03-01

    Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers and personal communications, up to June 2004. Priority was given to functioning computer programs, rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed. The programs were divided into two groups: those designed for individual genotype data (a total of 43 programs) and those designed for use with pooled DNA samples (a total of three programs). The accuracy of programs using various criteria are assessed and the programs are categorised and discussed in light of: algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics and web implementation. Many available programs have limitations (eg some cannot accommodate missing data) and/or are designed with specific tasks in mind (eg estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.

  6. Analysis of dam-passage survival of yearling and subyearling Chinook salmon and juvenile steelhead at The Dalles Dam, Oregon, 2010

    USGS Publications Warehouse

    Beeman, John W.; Kock, Tobias J.; Perry, Russell W.; Smith, Steven G.

    2011-01-01

    We performed a series of analyses of mark-recapture data from a study at The Dalles Dam during 2010 to determine if model assumptions for estimation of juvenile salmonid dam-passage survival were met and if results were similar to those using the University of Washington's newly developed ATLAS software. The study was conducted by the Pacific Northwest National Laboratory and used acoustic telemetry of yearling Chinook salmon, juvenile steelhead, and subyearling Chinook salmon released at three sites according to the new virtual/paired-release statistical model. This was the first field application of the new model, and the results were used to measure compliance with minimum survival standards set forth in a recent Biological Opinion. Our analyses indicated that most model assumptions were met. The fish groups mixed in time and space, and no euthanized tagged fish were detected. Estimates of reach-specific survival were similar in fish tagged by each of the six taggers during the spring, but not in the summer. Tagger effort was unevenly allocated temporally during tagging of subyearling Chinook salmon in the summer; the difference in survival estimates among taggers was more likely a result of a temporal trend in actual survival than of tagger effects. The reach-specific survival of fish released at the three sites was not equal in the reaches they had in common for juvenile steelhead or subyearling Chinook salmon, violating one model assumption. This violation did not affect the estimate of dam-passage survival, because data from the common reaches were not used in its calculation. Contrary to expectation, precision of survival estimates was not improved by using the most parsimonious model of recapture probabilities instead of the fully parameterized model. Adjusting survival estimates for differences in fish travel times and tag lives increased the dam-passage survival estimate for yearling Chinook salmon by 0.0001 and for juvenile steelhead by 0.0004. The estimate was unchanged for subyearling Chinook salmon. The tag-life-adjusted dam-passage survival estimates from our analyses were 0.9641 (standard error [SE] 0.0096) for yearling Chinook salmon, 0.9534 (SE 0.0097) for juvenile steelhead, and 0.9404 (SE 0.0091) for subyearling Chinook salmon. These were within 0.0001 of estimates made by the University of Washington using the ATLAS software. Contrary to the intent of the virtual/paired-release model to adjust estimates of the paired-release model downward in order to account for differential handling mortality rates between release groups, random variation in survival estimates may result in an upward adjustment of survival relative to estimates from the paired-release model. Further investigation of this property of the virtual/paired-release model likely would prove beneficial. In addition, we suggest that differential selective pressures near release sites of the two control groups could bias estimates of dam-passage survival from the virtual/paired-release model.

  7. The effect of row structure on soil moisture retrieval accuracy from passive microwave data.

    PubMed

    Xingming, Zheng; Kai, Zhao; Yangyang, Li; Jianhua, Ren; Yanling, Ding

    2014-01-01

    Row structure causes anisotropy in the microwave brightness temperature (TB) of the soil surface, and it can also affect soil moisture retrieval accuracy when its influence is ignored in the inversion model. To study the effect of typical row structure on the retrieved soil moisture and to evaluate whether this effect needs to be introduced into the inversion model, two ground-based experiments were carried out in 2011. Based on the observed C-band TB and field soil and vegetation parameters, a row-structured rough surface assumption (Qp model and discrete model), which includes the effect of row structure, and a flat rough surface assumption (Qp model), which ignores it, are used to model the microwave TB of the soil surface. Soil moisture is then retrieved by minimizing the difference between the measured and modeled TB under each assumption. The results show that soil moisture retrieval accuracy based on the row-structured rough surface assumption is approximately 0.02 cm³/cm³ better than with the flat rough surface assumption for vegetated soil, and 0.015 cm³/cm³ better for bare and wet soil. This result indicates that the effect of row structure cannot be ignored for accurately retrieving soil moisture of farmland surfaces when C-band is used.
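
    The retrieval step itself, minimizing the mismatch between measured and modeled TB over soil moisture, can be sketched in a few lines; the forward model below is a crude placeholder, not the Qp or discrete model, and all numbers are invented.

        from scipy.optimize import minimize_scalar

        def simulated_tb(soil_moisture):
            """Placeholder forward model: brightness temperature falls as the soil gets wetter."""
            return 280.0 - 180.0 * soil_moisture

        tb_measured = 235.0                                   # observed C-band TB (K), invented
        result = minimize_scalar(lambda sm: (simulated_tb(sm) - tb_measured) ** 2,
                                 bounds=(0.02, 0.45), method="bounded")
        print("retrieved soil moisture: %.3f cm^3/cm^3" % result.x)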

  8. The MONGOOSE Rational Arithmetic Toolbox.

    PubMed

    Le, Christopher; Chindelevitch, Leonid

    2018-01-01

    The modeling of metabolic networks has seen a rapid expansion following the complete sequencing of thousands of genomes. The constraint-based modeling framework has emerged as one of the most popular approaches to reconstructing and analyzing genome-scale metabolic models. Its main assumption is that of a quasi-steady state, requiring that the production of each internal metabolite be balanced by its consumption. However, due to the multiscale nature of the models, the large number of reactions and metabolites, and the use of floating-point arithmetic for the stoichiometric coefficients, ensuring that this assumption holds can be challenging. The MONGOOSE toolbox addresses this problem by using rational arithmetic, thus ensuring that models are analyzed in a reproducible manner and consistently with modeling assumptions. In this chapter we present a protocol for the complete analysis of a metabolic network model using the MONGOOSE toolbox, via its newly developed GUI, and describe how it can be used as a model-checking platform both during and after the model construction process.
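
    The point about floating-point stoichiometries is easy to see in isolation (this toy check in Python is not MONGOOSE itself): with exact rationals the quasi-steady-state residual of a balanced metabolite is exactly zero, while the same coefficients stored as floats leave a spurious imbalance.

        from fractions import Fraction

        # One metabolite produced by two reactions and consumed by a third, all at unit flux.
        coeffs_exact = [Fraction(1, 10), Fraction(2, 10), Fraction(-3, 10)]
        coeffs_float = [0.1, 0.2, -0.3]
        flux = [1, 1, 1]

        print(sum(c * v for c, v in zip(coeffs_exact, flux)))   # 0: exactly balanced
        print(sum(c * v for c, v in zip(coeffs_float, flux)))   # ~5.6e-17: spurious imbalance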

  9. Notes from 1999 on computational algorithm of the Local Wave-Vector (LWV) model for the dynamical evolution of the second-rank velocity correlation tensor starting from the mean-flow-coupled Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemach, Charles; Kurien, Susan

    These notes present an account of the Local Wave Vector (LWV) model of a turbulent flow defined throughout physical space. The previously developed Local Wave Number (LWN) model is taken as a point of departure. Some general properties of turbulent fields and appropriate notation are given first. The LWV model is presently restricted to incompressible flows, and the incompressibility assumption is introduced at an early point in the discussion. The assumption that the turbulence is homogeneous is also introduced early on. This assumption can be relaxed by generalizing the space diffusion terms of LWN, but the present discussion is focused on modeling homogeneous turbulence.

  10. Relating centrality to impact parameter in nucleus-nucleus collisions

    NASA Astrophysics Data System (ADS)

    Das, Sruthy Jyothi; Giacalone, Giuliano; Monard, Pierre-Amaury; Ollitrault, Jean-Yves

    2018-01-01

    In ultrarelativistic heavy-ion experiments, one estimates the centrality of a collision by using a single observable, say n, typically given by the transverse energy or the number of tracks observed in a dedicated detector. The correlation between n and the impact parameter b of the collision is then inferred by fitting a specific model of the collision dynamics, such as the Glauber model, to experimental data. The goal of this paper is to assess precisely which information about b can be extracted from data without any specific model of the collision. Under the sole assumption that the probability distribution of n for a fixed b is Gaussian, we show that the probability distribution of the impact parameter in a narrow centrality bin can be accurately reconstructed up to 5% centrality. We apply our methodology to data from the Relativistic Heavy Ion Collider and the Large Hadron Collider. We propose a simple measure of the precision of the centrality determination, which can be used to compare different experiments.
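
    A numerical sketch of the reconstruction idea (the mean-multiplicity curve, widths, geometric prior and bin edges below are invented for illustration, not taken from the paper): assume P(n|b) is Gaussian, take P(b) proportional to b, and apply Bayes' rule to get the distribution of b for events falling in a given centrality bin.

        import numpy as np
        from scipy.stats import norm

        b = np.linspace(0.0, 20.0, 2001)                 # impact parameter grid (fm)
        db = b[1] - b[0]
        prior_b = b / (b * db).sum()                     # geometric prior, P(b) proportional to b

        mean_n = 3000.0 * np.exp(-(b / 8.0) ** 2)        # assumed mean multiplicity versus b
        sigma_n = 0.1 * mean_n + 20.0                    # assumed Gaussian width of P(n|b)

        def prob_in_bin(n_lo, n_hi):
            """P(n_lo <= n <= n_hi | b) under the Gaussian assumption, for each b."""
            return norm.cdf(n_hi, mean_n, sigma_n) - norm.cdf(n_lo, mean_n, sigma_n)

        weight = prob_in_bin(2500.0, np.inf) * prior_b   # e.g. an ultra-central multiplicity cut
        posterior = weight / (weight * db).sum()         # P(b | n in the bin)
        print("mean impact parameter in this bin: %.2f fm" % (b * posterior * db).sum())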

  11. Selective visual working memory in fear of spiders: the role of automaticity and material-specificity.

    PubMed

    Reinecke, Andrea; Becker, Eni S; Rinck, Mike

    2009-12-01

    According to cognitive models of anxiety, biases occur when threat processing is automatic rather than strategic. Therefore, most of these models predict an attentional bias, but not an explicit memory bias. We suggest dividing memory into the highly automatic working memory (WM) component versus long-term memory when investigating bias in anxiety. WM for threat has rarely been investigated, although its main function is stimulus monitoring, which is particularly important in anxiety. We investigated WM for spiders in spider fearfuls (SFs) versus non-anxious controls (NACs). In Experiment 1 (23 SFs/24 NACs), we replicated an earlier WM study, reducing strategic processing options. This led to stronger group differences and, thus, clearer WM threat biases. There were no group differences in Experiment 2 (18 SFs/19 NACs), using snakes instead of spiders to test whether WM biases are material-specific. This article supports cognitive models of anxiety in that biases are more likely to occur when strategic processing is reduced. However, it contradicts the assumption that explicit memory biases are not characteristic of anxiety.

  12. On Maximizing Item Information and Matching Difficulty with Ability.

    ERIC Educational Resources Information Center

    Bickel, Peter; Buyske, Steven; Chang, Huahua; Ying, Zhiliang

    2001-01-01

    Examined the assumption that matching difficulty levels of test items with an examinee's ability makes a test more efficient and challenged this assumption through a class of one-parameter item response theory models. Found the validity of the fundamental assumption to be closely related to the van Zwet tail ordering of symmetric distributions (W.…

  13. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2011-01-01

    The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…

  14. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  15. Bell Nonlocality, Signal Locality and Unpredictability (or What Bohr Could Have Told Einstein at Solvay Had He Known About Bell Experiments)

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.; Wiseman, Howard M.

    2012-10-01

    The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.

  16. Lot quality assurance sampling for monitoring immunization programmes: cost-efficient or quick and dirty?

    PubMed

    Sandiford, P

    1993-09-01

    In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
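
    The sensitivity/specificity trade-off can be made concrete by computing the operating characteristics of an LQAS decision rule from the binomial distribution. The sample size, decision threshold, and coverage values in this Python sketch are illustrative only.

        from scipy.stats import binom

        n, d = 19, 3          # lot sample size and maximum tolerated number of unimmunized children

        def prob_accept(coverage):
            """Probability of 'accepting' the programme, i.e. finding at most d unimmunized children."""
            return binom.cdf(d, n, 1.0 - coverage)

        for coverage in (0.95, 0.80, 0.65, 0.50):
            print("true coverage %3.0f%%: P(accept) = %.2f" % (100 * coverage, prob_accept(coverage)))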

  17. Emissions Scenario Portal for Visualization of Low Carbon Pathways

    NASA Astrophysics Data System (ADS)

    Friedrich, J.; Hennig, R. J.; Mountford, H.; Altamirano, J. C.; Ge, M.; Fransen, T.

    2016-12-01

    This proposal for a presentation is centered on a new project developed collaboratively by the World Resources Institute (WRI), Google Inc., and the Deep Decarbonization Pathways Project (DDPP). The project aims to develop an open online portal, the Emissions Scenario Portal (ESP), to enable users to easily visualize a range of future greenhouse gas emission pathways linked to different scenarios of economic and energy development, drawing from a variety of modeling tools. It is targeted at users who are not modelling experts but rather policy analysts or advisors, investment analysts, and similar professionals who draw on modelled scenarios to inform their work and who can benefit from better access to, and transparency around, the wide range of emerging scenarios on ambitious climate action. The ESP will provide information from scenarios in a visually appealing and easy-to-understand manner that enables these users to recognize the opportunities to reduce GHG emissions, the implications of the different scenarios, and the underlying assumptions. To facilitate the application of the portal and tools in policy dialogues, a series of country-specific and potentially sector-specific workshops with key decision-makers and analysts, supported by relevant analysis, will be organized by the key partners and in broader collaboration with others who might wish to convene relevant groups around the information. This project will provide opportunities for modelers to increase their outreach and visibility in the public space and to interact directly with key audiences of emissions scenarios, such as policy analysts and advisors. The information displayed on the portal will cover a wide range of indicators, sectors, and important scenario characteristics, such as macroeconomic information, emission factors, and policy and technology assumptions, in order to facilitate comparison. These indicators have been selected based on existing standards (such as the IIASA AR5 database, the Greenhouse Gas Protocol and accounting literature) and stakeholder consultations. Examples of use cases include technical advisers for governments, NGO/civil society advocates, investors and bankers, modelers and academics, and business sustainability officers.

  18. Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.

    2005-01-01

    Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack "driving-forces". Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with adequate probability of detection (POD)-specifically, 20/75 mils with 50%/99% POD. Measurements to confirm assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibration mode, further work to reconcile this difference is recommended. It is possible that vibrational modes unaccounted for in the flight spectra could increase the computed POFs.

  19. Exploring Life Support Architectures for Evolution of Deep Space Human Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Stambaugh, Imelda C.

    2015-01-01

    Life support system architectures for long duration space missions are often explored analytically in the human spaceflight community to find optimum solutions for mass, performance, and reliability. But in reality, many other constraints can guide the design when the life support system is examined within the context of an overall vehicle, as well as specific programmatic goals and needs. Between the end of the Constellation program and the development of the "Evolvable Mars Campaign", NASA explored a broad range of mission possibilities. Most of these missions will never be implemented but the lessons learned during these concept development phases may color and guide future analytical studies and eventual life support system architectures. This paper discusses several iterations of design studies from the life support system perspective to examine which requirements and assumptions, programmatic needs, or interfaces drive design. When doing early concept studies, many assumptions have to be made about technology and operations. Data can be pulled from a variety of sources depending on the study needs, including parametric models, historical data, new technologies, and even predictive analysis. In the end, assumptions must be made in the face of uncertainty. Some of these may introduce more risk as to whether the solution for the conceptual design study will still work when designs mature and data becomes available.

  20. Dynamically rich, yet parameter-sparse models for spatial epidemiology. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Jusup, Marko; Iwami, Shingo; Podobnik, Boris; Stanley, H. Eugene

    2015-12-01

    Since the very inception of mathematical modeling in epidemiology, scientists exploited the simplicity ingrained in the assumption of a well-mixed population. For example, perhaps the earliest susceptible-infectious-recovered (SIR) model, developed by L. Reed and W.H. Frost in the 1920s [1], included the well-mixed assumption such that any two individuals in the population could meet each other. The problem was that, unlike many other simplifying assumptions used in epidemiological modeling whose validity holds in one situation or another, well-mixed populations are almost non-existent in reality because the nature of human socio-economic interactions is, for the most part, highly heterogeneous (e.g. [2-6]).
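
    The well-mixed assumption enters the classical SIR equations through the mass-action term beta*S*I/N, which treats any pair of individuals as equally likely to meet. The sketch below illustrates that structure only; the parameter values are hypothetical and are not taken from the Reed-Frost work cited in the comment.

    ```python
    import numpy as np

    def sir_well_mixed(beta=0.3, gamma=0.1, n=10_000, i0=10, days=200, dt=0.1):
        """Forward-Euler integration of the well-mixed SIR model.

        The beta*S*I/n incidence term is where the well-mixed assumption
        lives: every susceptible is assumed equally likely to contact
        every infectious individual.
        """
        s, i, r = n - i0, i0, 0.0
        trajectory = []
        for step in range(int(days / dt)):
            new_infections = beta * s * i / n * dt   # mass-action incidence
            new_recoveries = gamma * i * dt
            s = s - new_infections
            i = i + new_infections - new_recoveries
            r = r + new_recoveries
            trajectory.append((step * dt, s, i, r))
        return np.array(trajectory)

    traj = sir_well_mixed()
    print(f"peak infectious fraction: {traj[:, 2].max() / 10_000:.3f}")
    ```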

  1. Crystal plasticity modeling of β phase deformation in Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Moore, John A.; Barton, Nathan R.; Florando, Jeff; Mulay, Rupalee; Kumar, Mukul

    2017-10-01

    Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material’s structure on a microscale is known to affect its mechanical properties but these effects are not fully understood. Specifically, this work will address the effects of low volume fraction intergranular β phase on Ti-6Al-4V’s mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high fidelity model captures mechanisms difficult to access via experiments or lower fidelity models. The results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.

  2. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.

    1972-01-01

    The problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars were investigated. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; navigation, terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks were studied: vehicle model design, mathematical modeling of dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement and transport parameter evaluation.

  3. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.

    1972-01-01

    Investigation of problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars has been undertaken. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks have been under study: vehicle model design, mathematical modeling of a dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement.

  4. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and, therefore, recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
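
    As a concrete illustration of the model structure (not of the Indian precipitation data themselves), the sketch below simulates binary rainfall-occurrence indicators from a logit-normal mixed model, logit(p_ij) = beta0 + beta1*x_ij + u_i with u_i ~ N(0, sigma_u^2); all parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_logit_normal(n_groups=20, n_obs=100, beta0=-1.0, beta1=0.8, sigma_u=0.6):
        """Simulate binary rainfall indicators from a logit-normal mixed model:
        logit(p_ij) = beta0 + beta1 * x_ij + u_i,  u_i ~ N(0, sigma_u^2).
        """
        u = rng.normal(0.0, sigma_u, size=n_groups)   # group-level random effects
        x = rng.normal(size=(n_groups, n_obs))        # a covariate (e.g., a humidity anomaly)
        logit_p = beta0 + beta1 * x + u[:, None]
        p = 1.0 / (1.0 + np.exp(-logit_p))
        y = rng.binomial(1, p)                        # rainfall occurrence indicators
        return x, y

    x, y = simulate_logit_normal()
    print("per-group occurrence rates:", y.mean(axis=1).round(2))
    ```

    Fitting is where the estimation algorithms compared above diverge; the simulation only illustrates why a single fixed intercept cannot capture the between-group spread in occurrence rates.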

  5. Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.

    1998-01-01

    The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT) and on models in which the upper and lower surfaces of the delamination or debond are represented with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues that are discussed include: constraint of translational degrees of freedom, rotational degrees of freedom or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
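
    As background for the VCCT calculation referenced above, the mode I strain energy release rate is commonly estimated from the nodal force at the delamination front and the relative opening displacement of the node pair just behind it, G_I = F Δw / (2 Δa b). The sketch below uses entirely hypothetical nodal values and is not tied to the plate-element configurations analyzed in the paper.

    ```python
    def vcct_mode_i(force_tip, delta_w, crack_step, width):
        """Mode I strain energy release rate via the virtual crack closure technique.

        force_tip  : nodal force normal to the crack plane at the delamination front
        delta_w    : relative opening displacement of the node pair behind the front
        crack_step : element length ahead of the front (virtual crack extension)
        width      : element width along the delamination front
        """
        return force_tip * delta_w / (2.0 * crack_step * width)

    # hypothetical values in consistent units (N, mm)
    G_I = vcct_mode_i(force_tip=12.5, delta_w=2.0e-3, crack_step=0.5, width=1.0)
    print(f"G_I = {G_I:.4f} N/mm")
    ```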

  6. Customer-Provider Strategic Alignment: A Maturity Model

    NASA Astrophysics Data System (ADS)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a ­customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  7. Causal Models with Unmeasured Variables: An Introduction to LISREL.

    ERIC Educational Resources Information Center

    Wolfle, Lee M.

    Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…

  8. Assumptions of Asian American Similarity: The Case of Filipino and Chinese American Students

    ERIC Educational Resources Information Center

    Agbayani-Siewert, Pauline

    2004-01-01

    The conventional research model of clustering ethnic groups into four broad categories risks perpetuating a pedagogy of stereotypes in social work policies and practice methods. Using an elaborated research model, this study tested the assumption of cultural similarity of Filipino and Chinese American college students by examining attitudes,…

  9. An identifiable model for informative censoring

    USGS Publications Warehouse

    Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.

    1988-01-01

    The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.
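
    For reference, the product-limit (Kaplan-Meier) calculation that becomes unreliable under informative censoring is sketched below on hypothetical data; its validity rests on the very independence assumption the abstract questions.

    ```python
    import numpy as np

    def kaplan_meier(times, event):
        """Product-limit (Kaplan-Meier) survival estimate.

        times : observed times (event or censoring)
        event : 1 if the event was observed, 0 if the observation was censored
        Validity rests on censoring being unrelated to the lifetime, which is
        exactly the assumption questioned above.
        """
        times = np.asarray(times, dtype=float)
        event = np.asarray(event, dtype=int)
        order = np.lexsort((1 - event, times))   # events precede censorings at ties
        surv, at_risk, curve = 1.0, len(times), []
        for t, d in zip(times[order], event[order]):
            if d == 1:
                surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
            at_risk -= 1
        return curve

    # hypothetical data; event=0 marks censored observations
    print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
    ```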

  10. The Discrepancy-Induced Source Comprehension (D-ISC) Model: Basic Assumptions and Preliminary Evidence

    ERIC Educational Resources Information Center

    Braasch, Jason L. G.; Bråten, Ivar

    2017-01-01

    Despite the importance of source attention and evaluation for learning from texts, little is known about the particular conditions that encourage sourcing during reading. In this article, basic assumptions of the discrepancy-induced source comprehension (D-ISC) model are presented, which describes the moment-by-moment cognitive processes that…

  11. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source code-level. In particular we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  12. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    PubMed

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives. 2011 APA, all rights reserved
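
    The sketch below is a toy simulation in the spirit of experience sampling models, not a reproduction of the analysis in the paper: an agent re-samples an alternative only while its current estimate is favorable, so unlucky early draws are rarely corrected and the population of final estimates is biased downward even though each individual update is an unbiased running mean. All parameters are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def interested_sampling(n_agents=20_000, periods=50, true_mean=0.0, noise=1.0):
        """Each agent keeps a running average of a payoff with true mean 0.
        The alternative is re-sampled only when the current estimate is >= 0
        ("interested" sampling); otherwise the estimate is left untouched."""
        estimates = rng.normal(true_mean, noise, size=n_agents)  # one initial draw each
        counts = np.ones(n_agents)
        for _ in range(periods):
            sample_again = estimates >= 0.0
            draws = rng.normal(true_mean, noise, size=n_agents)
            counts = np.where(sample_again, counts + 1, counts)
            estimates = np.where(
                sample_again,
                estimates + (draws - estimates) / counts,   # unbiased running-mean update
                estimates,                                  # no new information gathered
            )
        return estimates

    final = interested_sampling()
    print("mean final estimate (true mean is 0):", round(final.mean(), 3))
    ```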

  13. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    DOE PAGES

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence; ...

    2017-01-07

    We develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity, using Colorado as a case study. Also, we focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado.

  14. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence

    We develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity, using Colorado as a case study. Also, we focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado.

  15. A theoretical approach to artificial intelligence systems in medicine.

    PubMed

    Spyropoulos, B; Papagounos, G

    1995-10-01

    The various theoretical models of disease, the nosology which is accepted by the medical community and the prevalent logic of diagnosis determine both the medical approach as well as the development of the relevant technology, including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, implicitly entail theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models (causal, probabilistic, case-based, etc.) are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism as well as the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision supporting "instruments" rather than regarding them as decision making "devices". This supporting role and, especially, the important function which A.I. systems should have in the structure, the methods and the content of medical education underscore the need for further research in the theoretical aspects and the actual development of such systems.

  16. Charge and pairing dynamics in the attractive Hubbard model: Mode coupling and the validity of linear-response theory

    NASA Astrophysics Data System (ADS)

    Bünemann, Jörg; Seibold, Götz

    2017-12-01

    Pump-probe experiments have turned out as a powerful tool in order to study the dynamics of competing orders in a large variety of materials. The corresponding analysis of the data often relies on standard linear-response theory generalized to nonequilibrium situations. Here we examine the validity of such an approach for the charge and pairing response of systems with charge-density wave and (or) superconducting (SC) order. Our investigations are based on the attractive Hubbard model which we study within the time-dependent Hartree-Fock approximation. In particular, we calculate the quench and pump-probe dynamics for SC and charge order parameters in order to analyze the frequency spectra and the coupling of the probe field to the specific excitations. Our calculations reveal that the "linear-response assumption" is justified for small to moderate nonequilibrium situations (i.e., pump pulses) in the case of a purely charge-ordered ground state. However, the pump-probe dynamics on top of a superconducting ground state is determined by phase and amplitude modes which get coupled far from the equilibrium state indicating the failure of the linear-response assumption.

  17. A single-degree-of-freedom model for non-linear soil amplification

    USGS Publications Warehouse

    Erdik, Mustafa Ozder

    1979-01-01

    For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have been favorably compared with the actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high frequency regions. In these frequency regions the equivalent-linear methods may underestimate the surface motion by as much as a factor of two or more. Although studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems, and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of the soil response through the use of a single-degree-of-freedom non-linear-hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described does not limit itself to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.

  18. Population-level differences in disease transmission: A Bayesian analysis of multiple smallpox epidemics

    PubMed Central

    Elderd, Bret D.; Dwyer, Greg; Dukic, Vanja

    2013-01-01

    Estimates of a disease’s basic reproductive rate R0 play a central role in understanding outbreaks and planning intervention strategies. In many calculations of R0, a simplifying assumption is that different host populations have effectively identical transmission rates. This assumption can lead to an underestimate of the overall uncertainty associated with R0, which, due to the non-linearity of epidemic processes, may result in a mis-estimate of epidemic intensity and miscalculated expenditures associated with public-health interventions. In this paper, we utilize a Bayesian method for quantifying the overall uncertainty arising from differences in population-specific basic reproductive rates. Using this method, we fit spatial and non-spatial susceptible-exposed-infected-recovered (SEIR) models to a series of 13 smallpox outbreaks. Five outbreaks occurred in populations that had been previously exposed to smallpox, while the remaining eight occurred in Native-American populations that were naïve to the disease at the time. The Native-American outbreaks were close in a spatial and temporal sense. Using Bayesian Information Criterion (BIC), we show that the best model includes population-specific R0 values. These differences in R0 values may, in part, be due to differences in genetic background, social structure, or food and water availability. As a result of these inter-population differences, the overall uncertainty associated with the “population average” value of smallpox R0 is larger, a finding that can have important consequences for controlling epidemics. In general, Bayesian hierarchical models are able to properly account for the uncertainty associated with multiple epidemics, provide a clearer understanding of variability in epidemic dynamics, and yield a better assessment of the range of potential risks and consequences that decision makers face. PMID:24021521

  19. Optimal trajectories for an aerospace plane. Part 1: Formulation, results, and analysis

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    The optimization of the trajectories of an aerospace plane is discussed. This is a hypervelocity vehicle capable of achieving orbital speed, while taking off horizontally. The vehicle is propelled by four types of engines: turbojet engines for flight at subsonic speeds/low supersonic speeds; ramjet engines for flight at moderate supersonic speeds/low hypersonic speeds; scramjet engines for flight at hypersonic speeds; and rocket engines for flight at near-orbital speeds. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied under the following assumptions: the turbojet portion of the trajectory has been completed; the aerospace plane is controlled via the angle of attack and the power setting; the aerodynamic model is the generic hypersonic aerodynamics model example (GHAME). Concerning the engine model, three options are considered: (EM1), a ramjet/scramjet combination in which the scramjet specific impulse tends to a nearly-constant value at large Mach numbers; (EM2), a ramjet/scramjet combination in which the scramjet specific impulse decreases monotonically at large Mach numbers; and (EM3), a ramjet/scramjet/rocket combination in which, owing to stagnation temperature limitations, the scramjet operates only at M approx. less than 15; at higher Mach numbers, the scramjet is shut off and the aerospace plane is driven only by the rocket engines. Under the above assumptions, four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (P1) minimization of the weight of fuel consumed; (P2) minimization of the peak dynamic pressure; (P3) minimization of the peak heating rate; and (P4) minimization of the peak tangential acceleration.

  20. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  1. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
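
    As a toy illustration of the Markovian-versus-memory distinction (unrelated to the star-polymer systems or the DPD thermostat details above, and omitting the fluctuating force entirely), the sketch below integrates a one-dimensional velocity relaxation with an exponential memory kernel and compares it with the Markovian limit carrying the same total friction. All parameter values are hypothetical.

    ```python
    import numpy as np

    def relax(markovian, gamma=1.0, tau=0.5, m=1.0, v0=1.0, dt=0.01, steps=2000):
        """Velocity relaxation with Markovian friction (-gamma*v) or with a
        non-Markovian term -int_0^t K(t-s) v(s) ds, where the exponential
        kernel K(t) = (gamma/tau) exp(-t/tau) carries the same total friction
        int_0^inf K dt = gamma."""
        v = np.empty(steps + 1)
        v[0] = v0
        t = np.arange(steps + 1) * dt
        for n in range(steps):
            if markovian:
                friction = gamma * v[n]
            else:
                kernel = (gamma / tau) * np.exp(-(t[n] - t[: n + 1]) / tau)
                friction = np.sum(kernel * v[: n + 1]) * dt   # simple quadrature of the memory integral
            v[n + 1] = v[n] - dt * friction / m
        return v

    v_markov = relax(markovian=True)
    v_memory = relax(markovian=False)
    print("v(t=5):", round(v_markov[500], 3), "(Markovian) vs",
          round(v_memory[500], 3), "(memory kernel)")
    ```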

  2. Why is metal bioaccumulation so variable? Biodynamics as a unifying concept

    USGS Publications Warehouse

    Luoma, Samuel N.; Rainbow, Philip S.

    2005-01-01

    Ecological risks from metal contaminants are difficult to document because responses differ among species, threats differ among metals, and environmental influences are complex. Unifying concepts are needed to better tie together such complexities. Here we suggest that a biologically based conceptualization, the biodynamic model, provides the necessary unification for a key aspect in risk:  metal bioaccumulation (internal exposure). The model is mechanistically based, but empirically considers geochemical influences, biological differences, and differences among metals. Forecasts from the model agree closely with observations from nature, validating its basic assumptions. The biodynamic metal bioaccumulation model combines targeted, high-quality geochemical analyses from a site of interest with parametrization of key physiological constants for a species from that site. The physiological parameters include metal influx rates from water, influx rates from food, rate constants of loss, and growth rates (when high). We compiled results from 15 publications that forecast species-specific bioaccumulation, and compare the forecasts to bioaccumulation data from the field. These data consider concentrations that cover 7 orders of magnitude. They include 7 metals and 14 species of animals from 3 phyla and 11 marine, estuarine, and freshwater environments. The coefficient of determination (R2) between forecasts and independently observed bioaccumulation from the field was 0.98. Most forecasts agreed with observations within 2-fold. The agreement suggests that the basic assumptions of the biodynamic model are tenable. A unified explanation of metal bioaccumulation sets the stage for a realistic understanding of toxicity and ecological effects of metals in nature.
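
    The biodynamic balance described above has a simple steady-state form, C_ss = (k_u*C_w + AE*IR*C_f) / (k_e + g). The sketch below evaluates it with purely hypothetical parameter values; the compiled forecasts rely on species- and site-specific constants measured in the cited studies.

    ```python
    def biodynamic_steady_state(ku, c_water, ae, ir, c_food, ke, g):
        """Steady-state tissue metal concentration from the biodynamic model.

        ku      : uptake rate constant from water   (L g^-1 d^-1)
        c_water : dissolved metal concentration     (ug L^-1)
        ae      : assimilation efficiency from food (fraction)
        ir      : ingestion rate                    (g g^-1 d^-1)
        c_food  : metal concentration in food       (ug g^-1)
        ke      : efflux (loss) rate constant       (d^-1)
        g       : growth rate constant              (d^-1)
        """
        uptake = ku * c_water + ae * ir * c_food
        return uptake / (ke + g)

    # hypothetical values for illustration only
    css = biodynamic_steady_state(ku=0.05, c_water=2.0, ae=0.6, ir=0.2, c_food=10.0,
                                  ke=0.02, g=0.01)
    print(f"predicted steady-state concentration: {css:.1f} ug/g")
    ```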

  3. Simple wealth distribution model causing inequality-induced crisis without external shocks

    NASA Astrophysics Data System (ADS)

    Benisty, Henri

    2017-05-01

    We address the issue of the dynamics of wealth accumulation and economic crisis triggered by extreme inequality, attempting to stick to the most intrinsic assumptions possible. Our general framework is that of pure or modified multiplicative processes, basically geometric Brownian motions. In contrast with the usual approach of injecting into such stochastic agent models either specific, idiosyncratic internal nonlinear interaction patterns or macroscopic disruptive features, we propose a dynamic inequality model where the attainment of a sizable fraction of the total wealth by very few agents induces a crisis regime with strong intermittency, the explicit coupling between the richest and the rest being a mere normalization mechanism, hence with minimal extrinsic assumptions. The model thus harnesses the recognized lack of ergodicity of geometric Brownian motions. It also provides statistical intuition for the consequences of Thomas Piketty's recent "r > g" (return rate > growth rate) paradigmatic analysis of very-long-term wealth trends. We suggest that the "water-divide" of wealth flow may define effective classes, making an objective entry point to calibrate the model. Consistently, we check that a tax mechanism associated with a few percent relative bias on elementary daily transactions is able to slow or stop the build-up of large wealth. When extreme fluctuations are tamed down to a stationary regime with sizable but steadier inequalities, it should still offer opportunities to study the dynamics of crisis and the inner effective classes induced through external or internal factors.
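
    A minimal sketch of the baseline ingredient, a discretized geometric Brownian motion for each agent, with an optional flat levy redistributed uniformly each step as a stand-in for the few-percent transaction bias mentioned above; all parameter values are illustrative and the full model's coupling between the richest agents and the rest is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_wealth(n_agents=10_000, steps=2_000, mu=0.0002, sigma=0.02, levy=0.0):
        """Multiplicative (geometric-Brownian-like) wealth dynamics with an
        optional flat levy redistributed equally among agents each step."""
        w = np.ones(n_agents)
        for _ in range(steps):
            w *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.normal(size=n_agents))
            if levy > 0.0:
                pot = levy * w
                w = w - pot + pot.sum() / n_agents   # uniform redistribution of the levy
        return w

    for levy in (0.0, 0.001):
        w = simulate_wealth(levy=levy)
        top1 = np.sort(w)[-len(w) // 100:].sum() / w.sum()
        print(f"levy={levy:.3f}: top-1% wealth share = {top1:.2f}")
    ```

    In this toy setting, the zero-levy case shows the familiar concentration of wealth shares produced by pure multiplicative dynamics, while even a small per-step levy tames it.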

  4. Volcanic Plume Heights on Mars: Limits of Validity for Convective Models

    NASA Technical Reports Server (NTRS)

    Glaze, Lori S.; Baloga, Stephen M.

    2002-01-01

    Previous studies have overestimated volcanic plume heights on Mars. In this work, we demonstrate that volcanic plume rise models, as currently formulated, have only limited validity in any environment. These limits are easily violated in the current Mars environment and may also be violated for terrestrial and early Mars conditions. We indicate some of the shortcomings of the model with emphasis on the limited applicability to current Mars conditions. Specifically, basic model assumptions are violated when (1) vertical velocities exceed the speed of sound, (2) radial expansion rates exceed the speed of sound, (3) radial expansion rates approach or exceed the vertical velocity, or (4) plume radius grossly exceeds plume height. All of these criteria are violated for the typical Mars example given here. Solutions imply that the convective rise model is only valid to a height of approximately 10 kilometers. The reason for the model breakdown is that the current Mars atmosphere is not of sufficient density to satisfy the conservation equations. It is likely that diffusion and other effects governed by higher-order differential equations are important within the first few kilometers of rise. When the same criteria are applied to eruptions into a higher-density early Mars atmosphere, we find that eruption rates higher than 1.4 x 10(exp 9) kilograms per second also violate model assumptions. This implies a maximum extent of approximately 65 kilometers for convective plumes on early Mars. The estimated plume heights for both current and early Mars are significantly lower than those previously predicted in the literature. Therefore, global-scale distribution of ash seems implausible.

  5. High Altitude Venus Operations Concept Trajectory Design, Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Ozoroski, Thomas A.; Van Norman, John W.; Arney, Dale C.; Dec, John A.; Jones, Christopher A.; Zumwalt, Carlie H.

    2015-01-01

    A trajectory design and analysis that describes aerocapture, entry, descent, and inflation of manned and unmanned High Altitude Venus Operation Concept (HAVOC) lighter-than-air missions is presented. Mission motivation, concept of operations, and notional entry vehicle designs are presented. The initial trajectory design space is analyzed and discussed before investigating specific trajectories that are deemed representative of a feasible Venus mission. Under the project assumptions, while the high-mass crewed mission will require further research into aerodynamic decelerator technology, it was determined that the unmanned robotic mission is feasible using current technology.

  6. How Many Ch-Class NEOs Do We Expect?

    NASA Astrophysics Data System (ADS)

    Rivkin, A. S.; DeMeo, F. E.

    2017-09-01

    The Ch spectral class is thought to contain objects that have water in their minerals, and they are of great interest to scientists and the nascent asteroid mining industry. We use models of asteroid delivery to near-Earth space and measurements of the different compositions of asteroids to estimate there should be at least 20 Ch asteroids larger than 100 m that are more accessible than the Moon, though we note that there are some untested assumptions that lead to that number. Further work must be done to identify the specific Ch asteroids.

  7. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  8. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  9. Artifacts, assumptions, and ambiguity: Pitfalls in comparing experimental results to numerical simulations when studying electrical stimulation of the heart.

    PubMed

    Roth, Bradley J.

    2002-09-01

    Insidious experimental artifacts and invalid theoretical assumptions complicate the comparison of numerical predictions and observed data. Such difficulties are particularly troublesome when studying electrical stimulation of the heart. During unipolar stimulation of cardiac tissue, the artifacts include nonlinearity of membrane dyes, optical signals blocked by the stimulating electrode, averaging of optical signals with depth, lateral averaging of optical signals, limitations of the current source, and the use of excitation-contraction uncouplers. The assumptions involve electroporation, membrane models, electrode size, the perfusing bath, incorrect model parameters, the applicability of a continuum model, and tissue damage. Comparisons of theory and experiment during far-field stimulation are limited by many of these same factors, plus artifacts from plunge and epicardial recording electrodes and assumptions about the fiber angle at an insulating boundary. These pitfalls must be overcome in order to understand quantitatively how the heart responds to an electrical stimulus. (c) 2002 American Institute of Physics.

  10. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Treesearch

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  11. Multiple imputation for handling missing outcome data when estimating the relative risk.

    PubMed

    Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-09-06

    Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
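
    To make the point concrete, the sketch below simulates a binary exposure and outcome, deletes outcomes at a rate that depends on exposure, imputes them from a logistic model fitted to the complete cases (a simplified stand-in for fully conditional specification that ignores imputation-parameter uncertainty), and pools log relative risks across imputations. All parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(n=20_000, p_exposed=0.5, risk0=0.10, rr=2.0):
        """Binary exposure x, binary outcome y, and MAR missingness in y."""
        x = rng.binomial(1, p_exposed, size=n)
        y = rng.binomial(1, np.where(x == 1, risk0 * rr, risk0))
        miss = rng.random(n) < np.where(x == 1, 0.35, 0.15)   # more missingness if exposed
        return x, y, miss

    def logistic_fit(x, y):
        """Newton iterations for logit P(y=1) = b0 + b1*x with one binary covariate."""
        X = np.column_stack([np.ones_like(x, dtype=float), x.astype(float)])
        beta = np.zeros(2)
        for _ in range(25):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (y - p)
            hess = (X * (p * (1 - p))[:, None]).T @ X
            beta += np.linalg.solve(hess, grad)
        return beta

    x, y, miss = simulate()
    y_obs = np.where(miss, np.nan, y)
    beta = logistic_fit(x[~miss], y[~miss])          # imputation model from complete cases
    p_miss = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x[miss])))

    log_rrs = []
    for _ in range(20):                              # 20 imputed datasets
        y_imp = y_obs.copy()
        y_imp[miss] = rng.binomial(1, p_miss)
        rr_hat = y_imp[x == 1].mean() / y_imp[x == 0].mean()
        log_rrs.append(np.log(rr_hat))
    print("pooled RR estimate:", round(np.exp(np.mean(log_rrs)), 2), "(true RR = 2.0)")
    ```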

  12. Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.

    PubMed

    Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N

    2016-07-01

    There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent and hope our open-source code is useful in order to aspire to a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.

  13. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independent assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. ?? 2006.
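
    A minimal sketch of the logistic-discriminant half of the comparison, fitted to synthetic stand-ins for the DEM-derived grids (elevation, slope, aspect) and a coded lithology; it is not the Atchison County dataset and the coefficients are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    # synthetic cell-level predictors standing in for 30-m DEM derivatives
    n_cells = 5_000
    elevation = rng.uniform(250, 350, n_cells)          # m
    slope = rng.uniform(0, 30, n_cells)                 # degrees
    aspect = rng.uniform(0, 360, n_cells)               # degrees
    lithology = rng.integers(0, 3, n_cells)             # coded bedrock units

    # synthetic "truth": landslides more likely on steep slopes in unit 2
    logit = -6.0 + 0.25 * slope + 1.0 * (lithology == 2)
    landslide = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X = np.column_stack([elevation, slope,
                         np.cos(np.radians(aspect)), np.sin(np.radians(aspect)),
                         (lithology == 1), (lithology == 2)]).astype(float)

    model = LogisticRegression(max_iter=1000).fit(X, landslide)
    hazard = model.predict_proba(X)[:, 1]               # relative hazard per cell
    top = hazard >= np.quantile(hazard, 0.9)
    print("landslide rate in top-10%% hazard cells: %.3f vs overall %.3f"
          % (landslide[top].mean(), landslide.mean()))
    ```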

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clausen, Jonathan R.; Brunini, Victor E.; Moffat, Harry K.

    We develop a capability to simulate reduction-oxidation (redox) flow batteries in the Sierra Multi-Mechanics code base. Specifically, we focus on all-vanadium redox flow batteries; however, the capability is general in implementation and could be adapted to other chemistries. The electrochemical and porous flow models follow those developed in the recent publication by [28]. We review the model implemented in this work and its assumptions, and we show several verification cases including a binary electrolyte and a battery half-cell. Then, we compare our model implementation with the experimental results shown in [28], with good agreement seen. Next, a sensitivity study is conducted for the major model parameters, which is beneficial in targeting specific features of the redox flow cell for improvement. Lastly, we simulate a three-dimensional version of the flow cell to determine the impact of plenum channels on the performance of the cell. Such channels are frequently seen in experimental designs where the current collector plates are borrowed from fuel cell designs. These designs use a serpentine channel etched into a solid collector plate.

  15. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  16. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are determined by the physical properties of both liquids and the flow properties inside the stirred tank. The first stage of the investigation focuses on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximal reproducibility of the results. The experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure breakage of droplets.

  17. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
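
    One common way to operationalize a fixation-prediction benchmark is an AUC-style score in which saliency values at fixated pixels are treated as positives and values elsewhere as negatives; the sketch below illustrates the general idea rather than any specific benchmark from the paper, and the map and fixations are hypothetical.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def saliency_auc(saliency_map, fixations):
        """AUC score for a saliency map given fixation coordinates (row, col).
        Fixated pixels are positives; every other pixel is a negative."""
        labels = np.zeros(saliency_map.shape, dtype=int)
        rows, cols = zip(*fixations)
        labels[list(rows), list(cols)] = 1
        return roc_auc_score(labels.ravel(), saliency_map.ravel())

    # hypothetical 64x64 saliency map and five fixations
    smap = rng.random((64, 64))
    fix = [(10, 12), (30, 31), (45, 5), (20, 50), (60, 60)]
    print("AUC of a random map (chance level): %.2f" % saliency_auc(smap, fix))
    ```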

  18. Thermal Pollution Mathematical Model. Volume 3: User's Manual for One-Dimensional Numerical Model for the Seasonal Thermocline. [environment impact of thermal discharges from power plants

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1980-01-01

    A user's manual for a one-dimensional thermal model to predict the temperature profiles of a deep body of water for any number of annual cycles is presented. The model is essentially a set of partial differential equations which are solved by finite difference methods using a high speed digital computer. The model features the effects of area change with depth, nonlinear interaction of wind generated turbulence and buoyancy, absorption of radiative heat flux below the surface, thermal discharges, and the effects of vertical convection caused by discharge. The main assumption in the formulation is horizontal homogeneity. The environmental impact of thermal discharges from power plants is emphasized. Although the model is applicable to most lakes, a specific site (Lake Keowee, S.C.) application is described in detail. The programs are written in FORTRAN 5.
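
    A minimal sketch of this kind of one-dimensional vertical heat budget is given below. The explicit finite-difference scheme, the area-depth relation, the constant eddy diffusivity, and the surface-exchange and solar-absorption parameters are illustrative assumptions and are not taken from the manual or the Lake Keowee application.

    ```python
    import numpy as np

    # Hedged sketch of a 1-D vertical lake temperature model of the kind described:
    #   A(z) dT/dt = d/dz( A(z) K dT/dz ) + A(z) q_solar(z)/(rho*cp) + surface exchange
    # solved with an explicit finite-difference scheme. All values are illustrative.
    nz, dz, dt = 40, 1.0, 600.0                    # 40 m depth, 1 m cells, 10 min steps
    z = (np.arange(nz) + 0.5) * dz
    A = 1e6 * (1.0 - z / 50.0)                     # assumed horizontal area vs depth (m^2)
    K = 1e-4                                       # eddy diffusivity (m^2/s), assumed constant
    rho_cp = 4.18e6                                # J/(m^3 K)
    I0, eta = 200.0, 0.5                           # surface solar flux (W/m^2), extinction (1/m)
    h, T_air = 25.0, 15.0                          # surface exchange coeff (W/m^2/K), air temp

    T = np.full(nz, 10.0)                          # initial temperature (deg C)
    for _ in range(int(30 * 86400 / dt)):          # integrate one month
        A_face = 0.5 * (A[:-1] + A[1:])
        flux = -A_face * K * (T[1:] - T[:-1]) / dz   # heat flow across interior faces
        dT = np.zeros(nz)
        dT[:-1] -= flux / (A[:-1] * dz)
        dT[1:] += flux / (A[1:] * dz)
        dT += I0 * eta * np.exp(-eta * z) / rho_cp   # sub-surface shortwave absorption
        dT[0] += h * (T_air - T[0]) / (rho_cp * dz)  # air-water exchange at the surface
        T += dt * dT

    print("after 30 days: surface %.2f degC, bottom %.2f degC" % (T[0], T[-1]))
    ```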

  19. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans.

    PubMed

    Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter

    2017-01-01

    Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press.

  20. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans

    PubMed Central

    Wilming, Niklas; Kietzmann, Tim C.; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A.; König, Peter

    2017-01-01

    Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. PMID:28077512

  1. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on the evaluation of organic aerosol model performance. One assumption concerns the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other concerns the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley, covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that using different enthalpy of vaporization values changes the shapes of the IAY curves and the response of the SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results. Similarly, using different assumed factors to convert measured organic carbon to organic aerosol concentrations causes substantial variations in the processed ambient data themselves, which are normally used as performance targets for model evaluations. The combination of uncertainties in the modeling results and in these moving performance targets causes major uncertainties in the final conclusion about model performance. Without further information, the best a modeler can do is to choose a combination of assumed values from the sensible parameter ranges available in the literature, based on the best match of the modeling results with the processed measurement data. However, the best match of the modeling results with the processed measurement data does not necessarily guarantee that the model itself is rigorous and the model performance is robust. Conclusions on model performance can only be reached with sufficient understanding of the uncertainties and their impact.
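
    Both embedded assumptions can be made concrete with a short sketch: a Clausius-Clapeyron adjustment of two-product partitioning coefficients for different assumed enthalpies of vaporization, followed by the measurement-side OM/OC conversion. The two-product parameters, the enthalpy values, and the conversion factors below are illustrative stand-ins, not the values used in the CMAQ-based study.

    ```python
    import numpy as np

    # Hedged sketch of the two embedded assumptions discussed above. The two-product
    # parameters (alpha, K_ref), delta_Hvap values, and OM/OC factors are illustrative.
    R = 8.314e-3  # kJ/(mol K)

    def K_at_T(K_ref, T, T_ref=298.0, dHvap=42.0):
        """Clausius-Clapeyron adjustment of a partitioning coefficient (m^3/ug)."""
        return K_ref * (T / T_ref) * np.exp(dHvap / R * (1.0 / T - 1.0 / T_ref))

    def yield_two_product(M_o, T, dHvap):
        """Instantaneous aerosol yield for an illustrative Odum two-product fit."""
        alpha = np.array([0.08, 0.25])
        K_ref = np.array([0.05, 0.002])
        K = K_at_T(K_ref, T, dHvap=dHvap)
        return M_o * np.sum(alpha * K / (1.0 + K * M_o))

    M_o = 10.0  # ug/m^3 absorbing organic mass (assumed)
    for dH in (42.0, 75.0, 156.0):   # range of delta_Hvap values seen in SOA modules
        print(f"dHvap={dH:5.1f} kJ/mol  yield(280K)={yield_two_product(M_o, 280.0, dH):.3f}"
              f"  yield(300K)={yield_two_product(M_o, 300.0, dH):.3f}")

    # Measurement side: converting measured organic carbon (OC) to organic mass (OM).
    oc = 5.0                   # ug C/m^3 measured (assumed)
    for f in (1.4, 1.6, 2.1):  # commonly assumed OM/OC conversion factors
        print(f"OM/OC={f}: ambient organic aerosol target = {oc * f:.1f} ug/m^3")
    ```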

  2. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  3. A Bayesian Multilevel Model for Microcystin Prediction in ...

    EPA Pesticide Factsheets

    The frequency of cyanobacteria blooms in North American lakes is increasing. A major concern with rising cyanobacteria blooms is microcystin, a common cyanobacterial hepatotoxin. To explore the conditions that promote high microcystin concentrations, we analyzed the US EPA National Lake Assessment (NLA) dataset collected in the summer of 2007. The NLA dataset is reported for nine eco-regions. We used the results of random forest modeling as a means of variable selection from which we developed a Bayesian multilevel model of microcystin concentrations. Model parameters under a multilevel modeling framework are eco-region specific, but they are also assumed to be exchangeable across eco-regions for broad continental scaling. The exchangeability assumption ensures that both the common patterns and eco-region specific features will be reflected in the model. Furthermore, the method incorporates appropriate estimates of uncertainty. Our preliminary results show associations between microcystin and turbidity, total nutrients, and N:P ratios. The NLA 2012 will be used for Bayesian updating. The results will help develop management strategies to alleviate microcystin impacts and improve lake quality. This work provides a probabilistic framework for predicting microcystin presence in lakes. It would allow for insights to be made about how changes in nutrient concentrations could potentially change toxin levels.
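
    The exchangeability assumption can be illustrated with a minimal partial-pooling sketch. Empirical-Bayes shrinkage of simulated eco-region means stands in for the full Bayesian multilevel model, and all data and variance parameters are invented for illustration.

    ```python
    import numpy as np

    # Hedged sketch of the exchangeability assumption: eco-region means are treated
    # as draws from a common distribution, so region estimates are shrunk toward the
    # continental mean. Data are simulated, not NLA values.
    rng = np.random.default_rng(1)
    n_regions = 9
    true_mu, true_tau, sigma = 0.0, 1.0, 2.0        # log-microcystin scale (assumed)
    n_j = rng.integers(5, 60, size=n_regions)       # unequal sample sizes per eco-region
    region_means = rng.normal(true_mu, true_tau, n_regions)
    ybar = np.array([rng.normal(m, sigma / np.sqrt(n)) for m, n in zip(region_means, n_j)])

    grand_mean = np.average(ybar, weights=n_j)
    # crude method-of-moments estimate of the between-region variance (floored at 0)
    tau2_hat = max(np.var(ybar, ddof=1) - np.mean(sigma**2 / n_j), 0.0)

    shrink = tau2_hat / (tau2_hat + sigma**2 / n_j)  # weight on each region's own data
    partial_pooled = shrink * ybar + (1 - shrink) * grand_mean

    for j in range(n_regions):
        print(f"region {j}: n={n_j[j]:2d}  raw={ybar[j]:+.2f}  pooled={partial_pooled[j]:+.2f}")
    ```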

  4. A New Framework for Cumulus Parametrization - A CPT in action

    NASA Astrophysics Data System (ADS)

    Jakob, C.; Peters, K.; Protat, A.; Kumar, V.

    2016-12-01

    The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and the variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how the close collaboration of the observational, theoretical, and model development communities in the spirit of the climate process teams can lead to significant progress in long-standing issues in climate modelling while preserving the freedom of individual groups to pursue their specific implementation of an agreed framework.

  5. Some Comments on Mapping from Disease-Specific to Generic Health-Related Quality-of-Life Scales

    PubMed Central

    Palta, Mari

    2013-01-01

    An article by Lu et al. in this issue of Value in Health addresses the mapping of treatment or group differences in disease-specific measures (DSMs) of health-related quality of life onto differences in generic health-related quality-of-life scores, with special emphasis on how the mapping is affected by the reliability of the DSM. In the proposed mapping, a factor analytic model defines a conversion factor between the scores as the ratio of factor loadings. Hence, the mapping applies to convert true underlying scales and has desirable properties facilitating the alignment of instruments and understanding their relationship in a coherent manner. It is important to note, however, that when DSM means or differences in mean DSMs are estimated, their mapping is still of a measurement error–prone predictor, and the correct conversion coefficient is the true mapping multiplied by the reliability of the DSM in the relevant sample. In addition, the proposed strategy for estimating the factor analytic mapping in practice requires assumptions that may not hold. We discuss these assumptions and how they may be the reason we obtain disparate estimates of the mapping factor in an application of the proposed methods to groups of patients. PMID:23337233
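
    The attenuation point can be checked with a small simulation: regressing the generic score on the error-prone DSM recovers the loading-ratio conversion multiplied by the DSM's reliability. The loadings and error variances below are arbitrary illustrative values.

    ```python
    import numpy as np

    # Hedged numeric sketch of the attenuation point made above: the observed
    # regression slope equals the true conversion factor (ratio of factor loadings)
    # times the reliability of the disease-specific measure (DSM).
    rng = np.random.default_rng(2)
    n = 100_000
    lam_dsm, lam_gen = 1.0, 0.6        # factor loadings (assumed)
    err_dsm, err_gen = 0.8, 0.5        # measurement error SDs (assumed)

    f = rng.standard_normal(n)                      # common latent health factor
    dsm = lam_dsm * f + err_dsm * rng.standard_normal(n)
    gen = lam_gen * f + err_gen * rng.standard_normal(n)

    true_conversion = lam_gen / lam_dsm
    reliability = lam_dsm**2 / (lam_dsm**2 + err_dsm**2)
    observed_slope = np.cov(dsm, gen)[0, 1] / np.var(dsm)

    print("true conversion (loading ratio):", true_conversion)
    print("reliability of DSM:             ", round(reliability, 3))
    print("observed regression slope:      ", round(observed_slope, 3),
          " vs conversion x reliability =", round(true_conversion * reliability, 3))
    ```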

  6. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.

    PubMed

    Vilhelmsen, Troels N; Ferré, Ty P A

    2018-05-01

    Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in data used to derive model structure and used to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given our common financial restrictions, it is critical that we identify data with maximal information content with respect to the forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include: simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
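
    A hedged sketch of a linear (first-order, second-moment) data-worth calculation of this kind follows: posterior forecast variance is evaluated for candidate measurement sets, which makes it easy to check whether greedy one-at-a-time selection matches the jointly optimal pair and to include a measurement-noise covariance. The sensitivities, covariances, and noise levels are synthetic.

    ```python
    import numpy as np
    from itertools import combinations

    # Hedged sketch of a linear data-worth analysis: posterior forecast variance
    # after collecting a candidate set of observations. All matrices are synthetic.
    rng = np.random.default_rng(3)
    n_par, n_obs = 6, 5
    C = np.diag(rng.uniform(0.5, 2.0, n_par))        # prior parameter covariance
    J = rng.standard_normal((n_obs, n_par))          # observation sensitivities dh/dp
    y = rng.standard_normal(n_par)                   # forecast sensitivity dF/dp
    R_full = 0.5 * np.eye(n_obs)                     # measurement-noise covariance

    def forecast_var(obs_idx, R=R_full):
        idx = list(obs_idx)
        if not idx:
            return y @ C @ y
        Js, Rs = J[idx], R[np.ix_(idx, idx)]
        S = Js @ C @ Js.T + Rs
        C_post = C - C @ Js.T @ np.linalg.solve(S, Js @ C)
        return y @ C_post @ y

    prior = forecast_var([])
    best_single = min(range(n_obs), key=lambda i: forecast_var([i]))
    greedy_pair = min(((best_single, j) for j in range(n_obs) if j != best_single),
                      key=forecast_var)
    best_pair = min(combinations(range(n_obs), 2), key=forecast_var)

    print("prior forecast variance:", round(prior, 3))
    print("best single obs:", best_single, "->", round(forecast_var([best_single]), 3))
    print("greedy pair:    ", greedy_pair, "->", round(forecast_var(greedy_pair), 3))
    print("jointly best pair:", best_pair, "->", round(forecast_var(best_pair), 3))
    ```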

  7. Testing Modeling Assumptions in the West Africa Ebola Outbreak

    NASA Astrophysics Data System (ADS)

    Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre

    2016-10-01

    The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
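
    One of the tests described, relating the initial growth rate to population density, can be sketched as a log-linear fit of early case counts per administrative division followed by a regression of the fitted rates on log density. The case counts and densities below are synthetic, not the West African data.

    ```python
    import numpy as np

    # Hedged sketch: estimate the initial exponential growth rate of cases in each
    # division by log-linear regression, then relate growth rate to population
    # density. All data here are synthetic.
    rng = np.random.default_rng(4)
    n_div, n_weeks = 30, 8
    density = 10 ** rng.uniform(1, 3, n_div)               # people per km^2 (assumed)
    true_r = 0.6 - 0.12 * np.log10(density) + 0.05 * rng.standard_normal(n_div)

    weeks = np.arange(n_weeks)
    r_hat = np.empty(n_div)
    for i in range(n_div):
        expected = 5.0 * np.exp(true_r[i] * weeks)
        cases = rng.poisson(expected) + 1                  # +1 avoids log(0)
        r_hat[i] = np.polyfit(weeks, np.log(cases), 1)[0]  # slope = growth rate / week

    slope, intercept = np.polyfit(np.log10(density), r_hat, 1)
    print(f"estimated change in growth rate per decade of density: {slope:+.3f} per week")
    ```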

  8. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  9. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
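
    The PBR calculation and the Leslie-matrix check can be sketched as follows. The maximum growth rate, recovery factor, minimum population size, and demographic rates are merely typical of a long-lived seabird and are not the values used in the study.

    ```python
    import numpy as np

    # Hedged sketch: the PBR harvest algorithm (PBR = 0.5 * Rmax * Fr * Nmin) and a
    # Leslie-matrix check of the extra mortality it licenses. Rates are illustrative.
    r_max, f_r, n_min = 0.10, 0.5, 50_000
    pbr = 0.5 * r_max * f_r * n_min
    print("PBR (additional mortalities/yr):", pbr)

    s_ad, s_im = 0.90, 0.80            # adult and immature survival (assumed)
    fec = 0.9 * 0.5 * 0.5              # chicks per adult surviving to age 1 (assumed)
    ages = 5                           # age at first breeding (assumed)

    def leslie(extra_mortality=0.0):
        sa = s_ad * (1 - extra_mortality)
        si = s_im * (1 - extra_mortality)
        L = np.zeros((ages, ages))
        L[0, -1] = fec                       # only the adult (last) class breeds
        for a in range(ages - 1):
            L[a + 1, a] = si if a < ages - 2 else sa
        L[-1, -1] = sa                       # adults remain adults
        return L

    # With these illustrative rates, PBR-level mortality pushes lambda below 1.
    for extra in (0.0, pbr / n_min):
        lam = np.max(np.linalg.eigvals(leslie(extra)).real)
        print(f"extra mortality {extra:.3f} -> lambda = {lam:.3f}")
    ```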

  10. A statistical analysis of the dependency of closure assumptions in cumulus parameterization on the horizontal resolution

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    1994-01-01

    Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.

  11. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    PubMed

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods: canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and to provide the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks: sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to contribute real multi-task fMRI data, both of which were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative regions for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to some competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
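
    The two-stage idea can be sketched on synthetic data: CCA links the two feature sets through correlated canonical variates, and ICA is then applied to the joint variates. This is a hedged illustration of the general scheme, not the authors' implementation, and the dimensions, mixing matrices, and noise level are invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.decomposition import FastICA

    # Hedged sketch of a two-stage CCA+ICA fusion on synthetic "two-task" features
    # with shared and distinct sources (not the authors' code or data).
    rng = np.random.default_rng(5)
    n_subj, n_feat = 500, 30

    shared = rng.standard_normal((n_subj, 2))        # sources common to both tasks
    distinct1 = rng.standard_normal((n_subj, 1))     # task-1-only source
    distinct2 = rng.standard_normal((n_subj, 1))     # task-2-only source

    A1, A2 = rng.standard_normal((3, n_feat)), rng.standard_normal((3, n_feat))
    X1 = np.hstack([shared, distinct1]) @ A1 + 0.1 * rng.standard_normal((n_subj, n_feat))
    X2 = np.hstack([shared, distinct2]) @ A2 + 0.1 * rng.standard_normal((n_subj, n_feat))

    cca = CCA(n_components=3).fit(X1, X2)
    U, V = cca.transform(X1, X2)                     # canonical variates per dataset

    corr = [abs(np.corrcoef(U[:, k], V[:, k])[0, 1]) for k in range(3)]
    print("canonical correlations:", np.round(corr, 2))  # shared components correlate highly

    ica = FastICA(n_components=3, random_state=0)
    S_hat = ica.fit_transform(np.hstack([U, V]))     # refine the joint sources
    print("recovered source matrix shape:", S_hat.shape)
    ```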

  12. A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.

    PubMed

    Chang, Ivan; Baldi, Pierre

    2013-05-15

    Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models are limited to the steady-state domain, lack empirical validation, or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from: http://www.igb.uci.edu/tools/sb/metabolic-modeling. Contact: pfbaldi@ics.uci.edu. Supplementary data are available at Bioinformatics online.
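
    As a hedged illustration of the kind of macroscopic steady-state rate law such a framework can generate, the sketch below evaluates the ping-pong bi-bi equation often used for two-substrate oxidoreductases. It is written in Python rather than Mathematica, is not RedoxMech, and uses invented kinetic constants.

    ```python
    # Hedged sketch of one macroscopic steady-state rate law (ping-pong bi-bi),
    # common among two-substrate oxidoreductases. Parameters are illustrative.
    def pingpong_rate(a, b, vmax=10.0, ka=0.2, kb=0.5):
        """v = Vmax*A*B / (Ka*B + Kb*A + A*B)  (concentrations in mM, rate in uM/s)."""
        return vmax * a * b / (ka * b + kb * a + a * b)

    for a in (0.05, 0.5, 5.0):
        for b in (0.1, 1.0):
            print(f"[A]={a:4.2f} mM  [B]={b:3.1f} mM  v={pingpong_rate(a, b):5.2f} uM/s")
    ```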

  13. Social Value Induction and Cooperation in the Centipede Game

    PubMed Central

    2016-01-01

    The Centipede game provides a dynamic model of cooperation and competition in repeated dyadic interactions. Two experiments investigated psychological factors driving cooperation in 20 rounds of a Centipede game with significant monetary incentives and anonymous and random re-pairing of players after every round. The main purpose of the research was to determine whether the pattern of strategic choices observed when no specific social value orientation is experimentally induced—the standard condition in all previous investigations of behavior in the Centipede and most other experimental games—is essentially individualistic, the orthodox game-theoretic assumption being that players are individualistically motivated in the absence of any specific motivational induction. Participants in whom no specific state social value orientation was induced exhibited moderately non-cooperative play that differed significantly from the pattern found when an individualistic orientation was induced. In both experiments, the neutral treatment condition, in which no orientation was induced, elicited competitive behavior resembling behavior in the condition in which a competitive orientation was explicitly induced. Trait social value orientation, measured with a questionnaire, influenced cooperation differently depending on the experimentally induced state social value orientation. Cooperative trait social value orientation was a significant predictor of cooperation and, to a lesser degree, experimentally induced competitive orientation was a significant predictor of non-cooperation. The experimental results imply that the standard assumption of individualistic motivation in experimental games may not be valid, and that the results of such investigations need to take into account the possibility that players are competitively motivated. PMID:27010385
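
    The orthodox individualistic benchmark against which cooperation is judged can be sketched by backward induction over an exponentially growing pot. The number of nodes, the split, the growth factor, and the terminal rule are illustrative and are not the payoff scheme used in these experiments.

    ```python
    # Hedged sketch of the orthodox benchmark: backward induction in a linear
    # Centipede game with an exponentially growing pot (payoffs illustrative).
    def backward_induction(n_nodes=6, pot=4.0, large=0.8, growth=2.0):
        # Payoff if the mover at node k takes: (large, small) share of the current pot.
        take = [(large * pot * growth**k, (1 - large) * pot * growth**k)
                for k in range(n_nodes)]
        # Terminal rule (illustrative): if the last mover passes, the pot grows once
        # more and the other player receives the large share.
        cont_mover = large * pot * growth**n_nodes
        cont_other = (1 - large) * pot * growth**n_nodes
        decision = [None] * n_nodes
        for k in reversed(range(n_nodes)):
            take_mover, take_other = take[k]
            if take_mover >= cont_other:      # passing makes me the "other" next round
                decision[k] = "take"
                cont_mover, cont_other = take_mover, take_other
            else:
                decision[k] = "pass"
                cont_mover, cont_other = cont_other, cont_mover
        return decision

    print(backward_induction())   # orthodox individualistic prediction: take at every node
    ```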

  14. PROcess Based Diagnostics PROBE

    NASA Technical Reports Server (NTRS)

    Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.

    2013-01-01

    Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.

  15. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  16. The Risk GP Model: the standard model of prediction in medicine.

    PubMed

    Fuller, Jonathan; Flores, Luis J

    2015-12-01

    With the ascent of modern epidemiology in the Twentieth Century came a new standard model of prediction in public health and clinical medicine. In this article, we describe the structure of the model. The standard model uses epidemiological measures-most commonly, risk measures-to predict outcomes (prognosis) and effect sizes (treatment) in a patient population that can then be transformed into probabilities for individual patients. In the first step, a risk measure in a study population is generalized or extrapolated to a target population. In the second step, the risk measure is particularized or transformed to yield probabilistic information relevant to a patient from the target population. Hence, we call the approach the Risk Generalization-Particularization (Risk GP) Model. There are serious problems at both stages, especially with the extent to which the required assumptions will hold and the extent to which we have evidence for the assumptions. Given that there are other models of prediction that use different assumptions, we should not inflexibly commit ourselves to one standard model. Instead, model pluralism should be standard in medical prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
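
    The two-step structure can be made concrete with a small numeric sketch: a risk ratio is generalized from the study to the target population (assuming transportability of the effect measure) and then particularized using a patient's baseline risk (assuming exchangeability with the reference class). All numbers are invented.

    ```python
    # Hedged numeric sketch of the two-step Risk GP structure; all values invented.
    # Step 1 (generalization): carry the study's risk ratio to the target population,
    # assuming the effect measure is transportable.
    risk_treated_study, risk_control_study = 0.08, 0.12
    risk_ratio = risk_treated_study / risk_control_study          # about 0.67

    # Step 2 (particularization): apply it to a patient's baseline (untreated) risk,
    # assuming the patient is exchangeable with their reference class in the target.
    baseline_risk_patient = 0.20
    predicted_risk_with_treatment = risk_ratio * baseline_risk_patient

    absolute_risk_reduction = baseline_risk_patient - predicted_risk_with_treatment
    print(f"predicted risk on treatment: {predicted_risk_with_treatment:.3f}")
    print(f"absolute risk reduction:     {absolute_risk_reduction:.3f}"
          f"  (NNT about {1 / absolute_risk_reduction:.0f})")
    ```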

  17. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is based entirely on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
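
    The role of the independence assumption can be sketched by comparing a simple proliferation cellular automaton with its mean-field logistic approximation; any gap between the two curves comes from correlations between neighbouring sites that the approximation ignores. The lattice size, proliferation probability, and update rule are illustrative, not those analysed in the paper.

    ```python
    import numpy as np

    # Hedged sketch: a simple proliferation cellular automaton versus its mean-field
    # approximation, which assumes neighbouring sites are occupied independently.
    rng = np.random.default_rng(6)
    size, p, steps = 100, 0.2, 60
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    lattice = rng.random((size, size)) < 0.01            # sparse initial seeding

    density_ca = [lattice.mean()]
    for _ in range(steps):
        for i, j in np.argwhere(lattice):                # snapshot of occupied sites
            if rng.random() < p:                         # attempt to proliferate
                di, dj = moves[rng.integers(4)]
                lattice[(i + di) % size, (j + dj) % size] = True   # no effect if occupied
        density_ca.append(lattice.mean())

    # Mean-field (independence) approximation: discrete logistic growth.
    c, density_mf = 0.01, [0.01]
    for _ in range(steps):
        c += p * c * (1 - c)
        density_mf.append(c)

    for t in (0, 20, 40, 60):
        print(f"step {t:2d}  CA density = {density_ca[t]:.3f}   mean-field = {density_mf[t]:.3f}")
    ```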

  18. Overview of physical models of liquid entrainment in annular gas-liquid flow

    NASA Astrophysics Data System (ADS)

    Cherdantsev, Andrey V.

    2018-03-01

    A number of recent papers devoted to the development of physically based models for predicting liquid entrainment in the annular regime of two-phase flow are analyzed. In these models, shearing-off of the crests of disturbance waves by the gas drag force is assumed to be the physical mechanism of the entrainment phenomenon. The models are based on a number of assumptions about the wavy structure, including inception of disturbance waves due to Kelvin-Helmholtz instability, a linear velocity profile inside the liquid film, and a high degree of three-dimensionality of the disturbance waves. The validity of these assumptions is analyzed by comparison to modern experimental observations. It is shown that nearly every assumption is in strong qualitative and quantitative disagreement with experiments, which leads to massive discrepancies between the modeled and real properties of the disturbance waves. As a result, such models over-predict the entrained fraction by several orders of magnitude. The discrepancy is usually reduced using various kinds of empirical corrections. This, combined with the empiricism already included in the models, turns them into another kind of empirical correlation rather than physically based models.

  19. Recognition and source memory as multivariate decision processes.

    PubMed

    Banks, W P

    2000-07-01

    Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
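
    A hedged sketch of such a multidimensional representation follows: item memories are bivariate Gaussians in a two-source strength space, recognition is read along the summed-strength axis, and source judgments along the difference axis. The strength gain, criteria, and identity covariance are illustrative assumptions, not the fitted model.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hedged sketch of a bivariate signal-detection representation: memory strength
    # for source A on one axis and source B on the other. Recognition uses the sum
    # axis, source memory the difference axis. Parameters are illustrative.
    d_prime = 1.5
    mu_A, mu_B, mu_new = np.array([d_prime, 0.0]), np.array([0.0, d_prime]), np.array([0.0, 0.0])

    def rate_along(mu, axis, criterion):
        """P(projection onto unit `axis` exceeds `criterion`), identity covariance."""
        axis = axis / np.linalg.norm(axis)
        mean_proj = mu @ axis
        return norm.sf(criterion, loc=mean_proj, scale=1.0)  # unit projection variance

    recog_axis, source_axis = np.array([1.0, 1.0]), np.array([1.0, -1.0])
    c_recog, c_source = 0.75, 0.0

    print("recognition hit (A items):", round(rate_along(mu_A, recog_axis, c_recog), 3))
    print("recognition hit (B items):", round(rate_along(mu_B, recog_axis, c_recog), 3))
    print("recognition FA  (new):    ", round(rate_along(mu_new, recog_axis, c_recog), 3))
    print("source: P(call 'A' | A item):", round(rate_along(mu_A, source_axis, c_source), 3))
    print("source: P(call 'A' | B item):", round(rate_along(mu_B, source_axis, c_source), 3))
    ```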

  20. Improved Temperature Dynamic Model of Turbine Subcomponents for Facilitation of Generalized Tip Clearance Control

    NASA Technical Reports Server (NTRS)

    Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo

    2004-01-01

    This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate design of a generalized tip clearance controller. This work builds upon previously conducted research and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and thus improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.
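
    A hedged sketch of the kind of simplified temperature sub-model described is given below: first-order lumped thermal lags for rotor, blade, and casing, with tip clearance approximated from differential thermal growth. All time constants, expansion coefficients, radii, and the gas-temperature profile are illustrative and do not represent the commercially validated model.

    ```python
    import numpy as np

    # Hedged sketch: first-order lumped thermal lags for rotor, blade, and casing,
    # with clearance approximated from differential thermal growth. All constants
    # are illustrative assumptions.
    dt, t_end = 0.1, 600.0
    t = np.arange(0.0, t_end, dt)
    T_gas = np.where(t < 60.0, 600.0, 900.0)                  # step in gas temperature (K)

    tau = {"rotor": 180.0, "blade": 20.0, "casing": 120.0}    # thermal time constants (s)
    alpha = {"rotor": 12e-6, "blade": 16e-6, "casing": 11e-6} # expansion coeffs (1/K)
    radius = {"rotor": 0.40, "blade": 0.10, "casing": 0.505}  # nominal radii (m)

    T = {k: np.full(t.size, 600.0) for k in tau}              # start in thermal equilibrium
    for k in tau:
        for i in range(1, t.size):
            T[k][i] = T[k][i - 1] + dt / tau[k] * (T_gas[i - 1] - T[k][i - 1])

    def growth(k, i):
        """Radial thermal growth of component k at time index i."""
        return radius[k] * alpha[k] * (T[k][i] - 600.0)

    clearance0 = radius["casing"] - (radius["rotor"] + radius["blade"])
    for i in (0, int(120 / dt), int(599 / dt)):
        gap = clearance0 + growth("casing", i) - growth("rotor", i) - growth("blade", i)
        print(f"t={t[i]:5.1f} s  clearance = {gap * 1e3:.3f} mm")
    ```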
