Sample records for effective modeling approach

  1. Factor Models for Ordinal Variables With Covariate Effects on the Manifest and Latent Variables: A Comparison of LISREL and IRT Approaches

    ERIC Educational Resources Information Center

    Moustaki, Irini; Joreskog, Karl G.; Mavridis, Dimitris

    2004-01-01

    We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables…

  2. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    ERIC Educational Resources Information Center

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  3. Cross-validation analysis for genetic evaluation models for ranking in endurance horses.

    PubMed

    García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I

    2018-01-01

    The ranking trait is used as a selection criterion for competition horses to estimate racing performance. In the literature, the most common approaches to estimating breeding values are linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach is able to fit the race effect (the competitive level of the horses that participate in the same race), suggesting better prediction accuracy of breeding values for the ranking trait. The aim of this study was to compare the predictive ability of linear, threshold and Thurstonian approaches for the genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The performance database contained 4065 ranking records from 966 horses, and the pedigree file contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for the threshold, 0.58 for the linear and 0.60 for the Thurstonian approach. Although no significant differences were found between models within approaches, the best genetic model included the rider and rider-horse random effects for the threshold approach, only the rider and environmental permanent effects for the linear approach, and all random effects for the Thurstonian approach. The absolute correlations of predicted breeding values among models were highest between the threshold and Thurstonian approaches: 0.90, 0.91 and 0.88 for all animals, the top 20% and the top 5% of animals, respectively. For rank correlations these figures were 0.85, 0.84 and 0.86. The lowest values were those between the linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for routine genetic evaluations of ranking in endurance horses.

  4. An Overview of the Effectiveness of Adolescent Substance Abuse Treatment Models.

    ERIC Educational Resources Information Center

    Muck, Randolph; Zempolich, Kristin A.; Titus, Janet C.; Fishman, Marc; Godley, Mark D.; Schwebel, Robert

    2001-01-01

    Describes current approaches to adolescent substance abuse treatment, including the 12-step treatment approach, behavioral treatment approach, family-based treatment approach, and therapeutic community approach. Summarizes research that assesses the effectiveness of these models, offering findings from the Center for Substance Abuse Treatment's…

  5. A Review on the Models of Organizational Effectiveness: A Look at Cameron's Model in Higher Education

    ERIC Educational Resources Information Center

    Ashraf, Giti; Kadir, Suhaida bte Abd

    2012-01-01

    Organizational effectiveness is the main concern of all higher education institutes. Over the years there have been many different models of effectiveness along with the criteria for measuring organizational effectiveness. In this paper, four main models of organizational effectiveness namely the goal approach, the system resource approach, the…

  6. DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.

    PubMed

    Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng

    2017-12-19

    Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
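
To make the bottom-up logic concrete, here is a minimal agent-based sketch of the escalation mechanism in Python. It is an illustration of the technique only: the inverse-square heat decay, threshold, and escalation probability are invented placeholders, not the calibrated DAMS model.

```python
import random

random.seed(0)

# Minimal domino-effect ABM sketch: installations are agents; a burning agent
# radiates heat that decays with distance; an exposed intact agent may ignite.
class Installation:
    def __init__(self, name, x, y, q_fire=50.0):
        self.name, self.x, self.y = name, x, y
        self.q_fire = q_fire        # heat emitted when burning (arbitrary units)
        self.burning = False

    def heat_received_from(self, other):
        if other is self or not other.burning:
            return 0.0
        d2 = (self.x - other.x) ** 2 + (self.y - other.y) ** 2
        return other.q_fire / max(d2, 1.0)      # crude inverse-square decay

def step(agents, q_threshold=5.0, p_escalate=0.3):
    """One synchronous update: intact units exposed above threshold may ignite."""
    ignited = [a for a in agents
               if not a.burning
               and sum(a.heat_received_from(b) for b in agents) > q_threshold
               and random.random() < p_escalate]
    for a in ignited:
        a.burning = True
    return ignited

agents = [Installation("T1", 0, 0), Installation("T2", 2, 0), Installation("T3", 4, 1)]
agents[0].burning = True                       # primary fire
for t in range(5):
    print(t, [a.name for a in step(agents)])   # escalation wave per time step
```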

  7. Lagged PM2.5 effects in mortality time series: Critical impact of covariate model

    EPA Science Inventory

    The two most common approaches to modeling the effects of air pollution on mortality are the Harvard and the Johns Hopkins (NMMAPS) approaches. These two approaches, which use different sets of covariates, result in dissimilar estimates of the effect of lagged fine particulate ma...

  8. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
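
The within-cluster resampling procedure itself is easy to sketch. The Python fragment below illustrates the idea on an ordinary linear model with simulated data (the paper works with generalized linear mixed-effects models and a real toxicity data set; all numbers here are invented): draw one observation per cluster, fit a standard model, repeat, and average the estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clusters(n_clusters=200):
    """Clusters whose size depends on the cluster random effect (informative)."""
    data = []
    for _ in range(n_clusters):
        u = rng.normal(0.0, 1.0)                  # cluster random effect
        size = 1 + rng.poisson(np.exp(0.5 * u))   # informative cluster size
        x = rng.normal(0.0, 1.0, size)            # within-cluster covariate
        y = 1.0 + 2.0 * x + u + rng.normal(0.0, 1.0, size)
        data.append((x, y))
    return data

def wcr_estimate(data, n_resamples=500):
    """Within-cluster resampling: one observation per cluster, fit, average."""
    betas = []
    for _ in range(n_resamples):
        xs = np.empty(len(data)); ys = np.empty(len(data))
        for i, (x, y) in enumerate(data):
            j = rng.integers(len(x))              # sample ONE unit per cluster
            xs[i], ys[i] = x[j], y[j]
        X = np.column_stack([np.ones_like(xs), xs])
        betas.append(np.linalg.lstsq(X, ys, rcond=None)[0])
    return np.mean(betas, axis=0)                 # average over resamples

print(wcr_estimate(simulate_clusters()))          # ~ [intercept 1.0, slope 2.0]
```

Sampling a single unit per cluster removes the within-cluster correlation and the dependence on cluster size, which is why a standard (non-mixed) model can be fit at each resample.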

  9. Prediction of Size Effects in Notched Laminates Using Continuum Damage Mechanics

    NASA Technical Reports Server (NTRS)

    Camanho, D. P.; Maimi, P.; Davila, C. G.

    2007-01-01

    This paper examines the use of a continuum damage model to predict strength and size effects in notched carbon-epoxy laminates. The effects of size and the development of a fracture process zone before final failure are identified in an experimental program. The continuum damage model is described and the resulting predictions of size effects are compared with alternative approaches: the point stress and the inherent flaw models, the Linear-Elastic Fracture Mechanics approach, and the strength of materials approach. The results indicate that the continuum damage model is the most accurate technique to predict size effects in composites. Furthermore, the continuum damage model does not require any calibration and it is applicable to general geometries and boundary conditions.

  10. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    PubMed

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations of functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
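
The tutorial itself is written in R around the mstate package. As a language-neutral illustration of the core bookkeeping in a cohort-level state-transition model (an occupancy vector advanced by a transition matrix while discounted costs and QALYs accumulate), here is a hedged Python sketch; the transition probabilities, costs, and utilities below are invented:

```python
import numpy as np

# Cohort-level state-transition bookkeeping (illustrative only; the tutorial
# extends R's 'mstate' to parametric multi-state survival models).
P = np.array([[0.90, 0.07, 0.03],    # annual transition probabilities:
              [0.00, 0.80, 0.20],    # rows = from (healthy, ill, dead)
              [0.00, 0.00, 1.00]])   # cols = to
cost = np.array([100.0, 2000.0, 0.0])   # annual cost per state (invented)
qaly = np.array([0.95, 0.60, 0.0])      # annual utility per state (invented)
disc = 0.035                            # discount rate

state = np.array([1.0, 0.0, 0.0])       # whole cohort starts healthy
total_cost = total_qaly = 0.0
for year in range(40):
    df = 1.0 / (1.0 + disc) ** year
    total_cost += df * state @ cost
    total_qaly += df * state @ qaly
    state = state @ P                    # advance the cohort one cycle

print(f"discounted cost {total_cost:.0f}, discounted QALYs {total_qaly:.2f}")
```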

  11. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

    This article reviews four recently suggested approaches to estimating the random-effects distribution in a linear mixed model: (1) the smoothing-by-roughening approach of Shen and Louis, (2) the semi-nonparametric approach of Zhang and Davidian, (3) the heterogeneity model of Verbeke and Lesaffre, and (4) the flexible approach of Ghidey et al. These four approaches are compared via an extensive simulation study. We conclude that for the considered cases, the approach of Ghidey et al. often has the smallest integrated mean squared error for estimating the random-effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.

  12. Provision of hearing aids to children in Bangladesh: costs and cost-effectiveness of a community-based and a centre-based approach.

    PubMed

    Ekman, Björn; Borg, Johan

    2017-08-01

    The aim of this study is to provide evidence on the costs and health effects of two alternative hearing aid delivery models, a community-based and a centre-based approach. The study is set in Bangladesh and the study population is children between 12 and 18 years old. Data on resource use by participants and their caregivers were collected by a household survey. Follow-up data were collected after two months. Data on the costs to providers of the two approaches were collected by means of key informant interviews. The total cost per participant in the community-based model was BDT 6,333 (USD 79) compared with BDT 13,718 (USD 172) for the centre-based model. Both delivery models are found to be cost-effective, with an estimated cost per DALY averted of BDT 17,611 (USD 220) for the community-based model and BDT 36,775 (USD 460) for the centre-based model. Using a community-based approach to deliver hearing aids to children in a resource-constrained environment is a cost-effective alternative to the traditional centre-based approach. Further evidence is needed to draw conclusions for scale-up of approaches; rigorous analysis is possible using well-prepared data collection tools and working closely with sector professionals. Implications for Rehabilitation: Delivery models vary in the resources needed for their implementation. Community-based delivery models of hearing aids to children in low-income countries are a cost-effective alternative. The assessment of costs and effects of hearing aid delivery models in low-income countries is possible through planned collaboration between researchers and sector professionals.

  13. Bayesian Variable Selection for Hierarchical Gene-Environment and Gene-Gene Interactions

    PubMed Central

    Liu, Changlu; Ma, Jianzhong; Amos, Christopher I.

    2014-01-01

    We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions and gene-environment interactions in the same model. Our approach incorporates the natural hierarchical structure between the main effects and interaction effects into a mixture model, such that our method tends to remove irrelevant interaction effects more effectively, resulting in more robust and parsimonious models. We consider both strong and weak hierarchical models. For a strong hierarchical model, both main effects of the interacting factors must be present for the interaction to be considered in model development, while for a weak hierarchical model, only one of the two main effects is required for the interaction to be evaluated. Our simulation results show that, in most of the scenarios simulated, the proposed strong and weak hierarchical mixture models work well in controlling false positive rates and provide a powerful approach for identifying predisposing effects and interactions in gene-environment interaction studies, in comparison with the naive model that does not impose this hierarchical constraint. We illustrated our approach using data for lung cancer and cutaneous melanoma. PMID:25154630

  14. EFFECTS OF CHRONIC STRESS ON WILDLIFE POPULATIONS: A POPULATION MODELING APPROACH AND CASE STUDY

    EPA Science Inventory

    This chapter describes a matrix modeling approach to characterize and project risks to wildlife populations subject to chronic stress. Population matrix modeling was used to estimate effects of one class of environmental contaminants, dioxin-like compounds (DLCs), to populations ...

  15. Cost-effectiveness analysis of diarrhoea management approaches in Nigeria: A decision analytical model

    PubMed Central

    Ekwunife, Obinna I.

    2017-01-01

    Background Diarrhoea is a leading cause of death in Nigerian children under 5 years. Implementing the most cost-effective approach to diarrhoea management in Nigeria will help optimize the allocation of health care resources. This study evaluated the cost-effectiveness of various approaches to diarrhoea management, namely: the ‘no treatment’ approach (NT); the preventive approach with rotavirus vaccine; the integrated management of childhood illness for diarrhoea approach (IMCI); and rotavirus vaccine plus integrated management of childhood illness for diarrhoea (rotavirus vaccine + IMCI). Methods A Markov cohort model built from the payer’s perspective was used to calculate the cost-effectiveness of the four interventions. The Markov model simulated a life cycle of 260 weeks for 33 million children under five years at risk of having diarrhoea (well state). Disability-adjusted life years (DALYs) averted were used to quantify the clinical outcome. The incremental cost-effectiveness ratio (ICER) served as the measure of cost-effectiveness. Results Based on a cost-effectiveness threshold of $2,177.99 (the Nigerian GDP per capita), all the approaches were very cost-effective, but the rotavirus vaccine approach was dominated. While IMCI had the lowest ICER of $4.6/DALY averted, the addition of rotavirus vaccine was cost-effective with an ICER of $80.1/DALY averted. Rotavirus vaccine alone was less efficient in optimizing health care resource allocation. Conclusion The rotavirus vaccine + IMCI approach was the most cost-effective approach to childhood diarrhoea management. Its awareness and practice should be promoted in Nigeria, and the addition of rotavirus vaccine should be considered for inclusion in the national programme of immunization. Although our findings suggest that adding rotavirus vaccine to IMCI for diarrhoea is cost-effective, further vaccine demonstration studies or real-life studies may be needed to establish the cost-effectiveness of the vaccine in Nigeria. PMID:29261649
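
The dominance logic reported above can be made concrete with a short sketch. The costs and DALYs below are invented placeholders chosen only to echo the qualitative pattern the abstract reports (IMCI cheapest per DALY, vaccine-only dominated, vaccine + IMCI most effective at a higher ICER); they are not the paper's data.

```python
# Incremental cost-effectiveness bookkeeping for four strategies.
strategies = [            # (name, cost per child, DALYs averted per child)
    ("no treatment",      0.0,  0.00),
    ("IMCI",              2.0,  0.40),
    ("rotavirus vaccine", 9.0,  0.30),
    ("rotavirus + IMCI", 12.0,  0.52),
]

strategies.sort(key=lambda s: s[1])          # order by increasing cost
base_name, base_cost, base_eff = strategies[0]
for name, cost, eff in strategies[1:]:
    if eff <= base_eff:                      # costs more, averts no more DALYs
        print(f"{name}: dominated by {base_name}")
        continue
    icer = (cost - base_cost) / (eff - base_eff)
    print(f"{name}: ICER = {icer:.1f} per DALY averted vs {base_name}")
    base_name, base_cost, base_eff = name, cost, eff
```

With these placeholder numbers, IMCI has an ICER of 5.0 versus no treatment, vaccine alone is dominated by IMCI, and vaccine + IMCI has an ICER of about 83 versus IMCI, mirroring the paper's qualitative ranking.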

  16. Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.

    PubMed

    Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C

    2015-01-01

    Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator, and the absence of unmeasured confounders that also moderate the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.

  17. Compact modeling of total ionizing dose and aging effects in MOS technologies

    DOE PAGES

    Esqueda, Ivan S.; Barnaby, Hugh J.; King, Michael Patrick

    2015-06-18

    This paper presents a physics-based compact modeling approach that incorporates the impact of total ionizing dose (TID) and stress-induced defects into simulations of metal-oxide-semiconductor (MOS) devices and integrated circuits (ICs). This approach utilizes calculations of surface potential (ψs) to capture the charge contribution from oxide trapped charge and interface traps and to describe their impact on MOS electrostatics and device operating characteristics as a function of ionizing radiation exposure and aging effects. The modeling approach is demonstrated for bulk and silicon-on-insulator (SOI) MOS devices. The formulation is verified using TCAD simulations and through the comparison of model calculations with experimental I-V characteristics from irradiated devices. The presented approach is suitable for modeling TID and aging effects in advanced MOS devices and ICs.

  18. Expeditionary Learning Approach in Integrated Teacher Education: Model Effectiveness and Dilemma.

    ERIC Educational Resources Information Center

    Hyun, Eunsook

    This paper introduces an integrated teacher education model based on the Expeditionary Learning Outward Bound Project model. It integrates early childhood, elementary, and special education and uses inquiry-oriented and social constructive approaches. It models a team approach, with all teachers unified in their mutually shared philosophy of…

  19. Combining inferences from models of capture efficiency, detectability, and suitable habitat to classify landscapes for conservation of threatened bull trout

    USGS Publications Warehouse

    Peterson, J.; Dunham, J.B.

    2003-01-01

    Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
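
In its simplest form, combining a model-based prior with negative survey results is a Bayes update. The sketch below (with invented ψ and p values) illustrates the generic calculation, not the authors' exact formulation.

```python
# If a habitat model gives a prior probability psi that the species is
# present, and each sampling pass detects it with probability p when present,
# then after n passes with no detections:
#
#   P(present | no detections) = psi * (1-p)**n / (psi * (1-p)**n + 1 - psi)
def posterior_presence(psi, p, n):
    miss = psi * (1.0 - p) ** n       # prior presence AND n missed detections
    return miss / (miss + (1.0 - psi))

for n in range(6):
    print(n, round(posterior_presence(psi=0.6, p=0.4, n=n), 3))
# Each unsuccessful pass lowers the posterior, so the survey effort needed to
# "clear" a site depends on the model-based prior psi -- the mechanism behind
# the improved cost-effectiveness of the combined approach.
```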

  20. Toward refined environmental scenarios for ecological risk assessment of down-the-drain chemicals in freshwater environments.

    PubMed

    Franco, Antonio; Price, Oliver R; Marshall, Stuart; Jolliet, Olivier; Van den Brink, Paul J; Rico, Andreu; Focks, Andreas; De Laender, Frederik; Ashauer, Roman

    2017-03-01

    Current regulatory practice for chemical risk assessment suffers from the lack of realism in conventional frameworks. Despite significant advances in exposure and ecological effect modeling, the implementation of novel approaches as high-tier options for prospective regulatory risk assessment remains limited, particularly among general chemicals such as down-the-drain ingredients. While reviewing the current state of the art in environmental exposure and ecological effect modeling, we propose a scenario-based framework that enables a better integration of exposure and effect assessments in a tiered approach. Global- to catchment-scale spatially explicit exposure models can be used to identify areas of higher exposure and to generate ecologically relevant exposure information for input into effect models. Numerous examples of mechanistic ecological effect models demonstrate that it is technically feasible to extrapolate from individual-level effects to effects at higher levels of biological organization and from laboratory to environmental conditions. However, the data required to parameterize effect models that can embrace the complexity of ecosystems are large and require a targeted approach. Experimental efforts should, therefore, focus on vulnerable species and/or traits and ecological conditions of relevance. We outline key research needs to address the challenges that currently hinder the practical application of advanced model-based approaches to risk assessment of down-the-drain chemicals. Integr Environ Assess Manag 2017;13:233-248. © 2016 SETAC.

  1. Integrated cost-effectiveness analysis of agri-environmental measures for water quality.

    PubMed

    Balana, Bedru B; Jackson-Blake, Leah; Martin-Ortega, Julia; Dunn, Sarah

    2015-09-15

    This paper presents an application of an integrated methodological approach for identifying cost-effective combinations of agri-environmental measures to achieve water quality targets. The methodological approach involves linking hydro-chemical modelling with the economic costs of mitigation measures. The utility of the approach was explored for the River Dee catchment in North East Scotland, examining the cost-effectiveness of mitigation measures for nitrogen (N) and phosphorus (P) pollutants. In-stream nitrate concentration was modelled using the STREAM-N model and phosphorus using the INCA-P model. Both models were first run for baseline conditions and then the effectiveness of changes in land management was simulated. Costs were based on farm income foregone and on capital and operational expenditures. The costs and effects data were integrated using 'Risk Solver Platform' optimization in Excel to produce the most cost-effective combination of measures by which target nutrient reductions could be attained at minimum economic cost. The analysis identified different combinations of measures as most cost-effective for the two pollutants. An important aspect of this paper is the integration of model-based effectiveness estimates with the economic cost of measures for cost-effectiveness analysis of land and water management options. The methodological approach developed is not limited to the two pollutants and the selected agri-environmental measures considered in the paper; it can be adapted to the cost-effectiveness analysis of any catchment-scale environmental management options. Copyright © 2015 Elsevier Ltd. All rights reserved.
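
The optimization step can be expressed as a small linear program. The sketch below uses scipy.optimize.linprog as a stand-in for the paper's spreadsheet-based 'Risk Solver Platform' step; the measure costs, effectiveness coefficients, and targets are invented placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Least-cost selection of mitigation measures subject to load-reduction
# targets. x[i] = fraction of the catchment on which measure i is applied.
costs = np.array([120.0, 300.0, 80.0, 150.0])   # cost per unit uptake (invented)
n_red = np.array([10.0,  25.0,  2.0,  12.0])    # kg N reduced per unit uptake
p_red = np.array([0.5,   1.0,   2.0,  0.8])     # kg P reduced per unit uptake
target_n, target_p = 20.0, 2.5                  # required reductions

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so express
# "reduction >= target" as "-reduction <= -target".
res = linprog(c=costs,
              A_ub=-np.vstack([n_red, p_red]),
              b_ub=-np.array([target_n, target_p]),
              bounds=[(0.0, 1.0)] * len(costs),
              method="highs")
print(res.x, res.fun)   # uptake of each measure and the minimum total cost
```

Because the N and P constraints bind differently, the optimal mix of measures differs by pollutant, which is exactly the pattern the paper reports.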

  2. Lattice dynamics approach to determine the dependence of the time-of-flight of transversal polarized acoustic waves on external stress

    NASA Astrophysics Data System (ADS)

    Tarar, K. S.; Pluta, M.; Amjad, U.; Grill, W.

    2011-04-01

    Based on the lattice dynamics approach, the dependence of the time-of-flight (TOF) on stress has been modeled for transversally polarized acoustic waves. The relevant dispersion relation is derived from the appropriate mass-spring model together with the dependencies of the restoring forces, including the effect of externally applied stress. The lattice dynamics approach can also be interpreted as a discrete and strictly periodic lumped circuit; in that case the modeling represents a finite element approach. In both cases the properties relevant for wavelengths large with respect to the periodic structure can be derived from the respective limit, which also relates to low frequencies. A model representing a linear chain with shear stiffness and additional stiffness introduced by extensional stress is presented and compared to existing models, each of which so far represents only one of the two effects treated here in combination. For a string the stress effect is well known from musical instruments. The counteracting effects are discussed and compared to experimental results.
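
For orientation, such a mass-spring chain leads, in its simplest form, to the textbook dispersion relation below. This is an illustrative generic form (with m the mass, a the spacing, k_s an effective transverse/shear stiffness, and F the externally applied axial force), not necessarily the paper's exact model:

```latex
% Generic transverse monatomic chain (illustrative; not the paper's exact model)
m\,\ddot{u}_n = \Bigl(k_s + \tfrac{F}{a}\Bigr)\bigl(u_{n+1} - 2u_n + u_{n-1}\bigr)
\qquad\Longrightarrow\qquad
\omega(q) = 2\sqrt{\frac{k_s + F/a}{m}}\,\Bigl|\sin\frac{qa}{2}\Bigr|
```

In the long-wavelength limit the group velocity, and hence the time-of-flight over a fixed path, depends on F, which is how external stress enters the TOF in this picture.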

  3. A Modeling-Based College Algebra Course and Its Effect on Student Achievement

    ERIC Educational Resources Information Center

    Ellington, Aimee J.

    2005-01-01

    In Fall 2004, Virginia Commonwealth University (VCU) piloted a modeling-based approach to college algebra. This paper describes the course and an assessment that was conducted to determine the effect of this approach on student achievement in comparison to a traditional approach to college algebra. The results show that compared with their…

  4. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple ones such as circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective at dealing with different types of nonlinearities and constraints such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective at tracking a given reference trajectory.
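
A receding-horizon loop is straightforward to sketch for a single axis. The Python fragment below uses a double integrator as a stand-in for one translational axis of the quadcopter and solves the unconstrained finite-horizon problem in closed form at each step; the paper's full 6-DOF model, constraints, and MATLAB/Simulink implementation are not reproduced here.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])
N = 20                                   # prediction horizon

# Prediction matrices: stacked states X = F x0 + G U over the horizon.
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Qblk = np.kron(np.eye(N), np.diag([10.0, 1.0]))   # state tracking weights
Rblk = np.eye(N) * 0.1                            # input effort weights

def mpc_step(x, x_ref):
    """Unconstrained finite-horizon QP: min (X-ref)'Q(X-ref) + U'RU."""
    ref = np.tile(x_ref, N)
    H = G.T @ Qblk @ G + Rblk
    f = G.T @ Qblk @ (F @ x - ref)
    U = np.linalg.solve(H, -f)
    return U[0]                  # apply only the first input (receding horizon)

x = np.array([2.0, 0.0])         # start 2 m from the reference
for k in range(50):
    u = mpc_step(x, np.array([0.0, 0.0]))
    x = A @ x + B.flatten() * u
print(x)                         # state has converged toward the origin
```

Handling actuator saturation would turn the closed-form solve into a constrained QP, which is where the computational expense noted in the abstract comes from.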

  5. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samin, Adib J.

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  6. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    NASA Astrophysics Data System (ADS)

    Samin, Adib J.

    2016-05-01

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
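
The random-walk treatment can be sketched in a few lines. In the Python illustration below, reduced-species walkers diffuse on a 1-D lattice and react at the electrode with a potential-dependent probability; the lattice parameters and the sigmoidal surface-reaction rule are invented stand-ins (and the reverse-sweep re-reduction branch is omitted), so this shows the approach rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cells, n_walkers, steps = 200, 20000, 4000
x = rng.integers(1, n_cells, n_walkers)            # reduced-species positions
alive = np.ones(n_walkers, dtype=bool)             # not yet oxidized
E = np.concatenate([np.linspace(-0.5, 0.5, steps // 2),
                    np.linspace(0.5, -0.5, steps // 2)])  # triangular sweep

current = np.zeros(steps)
for t in range(steps):
    x += rng.choice(np.array([-1, 1]), n_walkers)  # unbiased diffusion step
    np.clip(x, 0, n_cells - 1, out=x)              # reflecting far boundary
    # Electron-transfer probability rises sigmoidally with potential
    # (a crude stand-in for Butler-Volmer kinetics at the electrode surface).
    p_react = 1.0 / (1.0 + np.exp(-E[t] / 0.05))
    react = (x == 0) & alive & (rng.random(n_walkers) < p_react)
    current[t] = react.sum()                       # oxidation events ~ current
    alive &= ~react                                # oxidized walkers are consumed

print(current.argmax(), E[current.argmax()])       # crude peak step and potential
```

Depletion of walkers near the electrode reproduces the diffusion-limited current peak qualitatively, which is the kind of microscopic-level behavior the abstract refers to.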

  7. Modelling food-web mediated effects of hydrological variability and environmental flows.

    PubMed

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to the understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage the outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models, are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  8. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  9. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  10. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
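
The two-stage logic is easy to demonstrate on a linear ODE. The sketch below (Python; one subject, no mixed effects, and a damped linear oscillator rather than the paper's nonlinear coupled oscillators) uses Savitzky-Golay smoothing as the stage-1 derivative estimator, in the same local-polynomial spirit as GLLA and GOLD, and recovers the ODE coefficients by least squares in stage 2.

```python
import numpy as np
from scipy.signal import savgol_filter

# True model: x'' = eta*x + zeta*x' (all parameter values invented).
dt = 0.05
t = np.arange(0.0, 20.0, dt)
eta, zeta = -0.8, -0.1                      # true ODE coefficients
omega_d = np.sqrt(-eta - zeta**2 / 4.0)     # damped oscillation frequency
x_true = np.exp(zeta * t / 2.0) * np.cos(omega_d * t)
x_obs = x_true + np.random.default_rng(2).normal(0.0, 0.01, t.size)

# Stage 1: local-polynomial estimates of x, x', x'' from the noisy series
# (the smoothing window plays the role of the embedding dimension).
kw = dict(window_length=21, polyorder=3, delta=dt)
x0 = savgol_filter(x_obs, **kw)
x1 = savgol_filter(x_obs, deriv=1, **kw)
x2 = savgol_filter(x_obs, deriv=2, **kw)

# Stage 2: regress the estimated x'' on (x, x') to recover (eta, zeta).
X = np.column_stack([x0, x1])
est, *_ = np.linalg.lstsq(X, x2, rcond=None)
print(est)   # ~ [-0.8, -0.1]
```

In the mixed-effects setting, stage 2 is replaced by a random-coefficients regression across subjects, which is where the approaches compared in the paper differ.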

  11. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information, in combination with basic display information, on approach performance is examined. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  12. A dynamic spatio-temporal model for spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin; Walsh, Daniel P.

    2017-01-01

    Analyzing spatial data often requires modeling dependencies created by a dynamic spatio-temporal data generating process. In many applications, a generalized linear mixed model (GLMM) is used with a random effect to account for spatial dependence and to provide optimal spatial predictions. Location-specific covariates are often included as fixed effects in a GLMM and may be collinear with the spatial random effect, which can negatively affect inference. We propose a dynamic approach to account for spatial dependence that incorporates scientific knowledge of the spatio-temporal data generating process. Our approach relies on a dynamic spatio-temporal model that explicitly incorporates location-specific covariates. We illustrate our approach with a spatially varying ecological diffusion model implemented using a computationally efficient homogenization technique. We apply our model to understand individual-level and location-specific risk factors associated with chronic wasting disease in white-tailed deer from Wisconsin, USA, and to estimate the location where the disease was first introduced. We compare our approach to several existing methods that are commonly used in spatial statistics. Our spatio-temporal approach resulted in higher predictive accuracy when compared to methods based on optimal spatial prediction, obviated confounding among the spatially indexed covariates and the spatial random effect, and provided additional information that will be important for containing disease outbreaks.

  13. Effects of Distance Coaching on Teachers' Use of Pyramid Model Practices: A Pilot Study

    ERIC Educational Resources Information Center

    Artman-Meeker, Kathleen; Hemmeter, Mary Louise; Snyder, Patricia

    2014-01-01

    The purpose of this pilot study was to compare the effects of 2 professional development approaches on teachers' implementation of the "Pyramid" model, a classroom-wide approach for fostering social-emotional development and addressing challenging behavior. The study had 2 goals: (a) to examine the differential effects of workshop…

  14. A hybrid modelling approach for predicting ground vibration from trains

    NASA Astrophysics Data System (ADS)

    Triepaischajonsak, N.; Thompson, D. J.

    2015-01-01

    The prediction of ground vibration from trains presents a number of difficulties. The ground is effectively an infinite medium, often with a layered structure and with properties that may vary greatly from one location to another. The vibration from a passing train forms a transient event, which limits the usefulness of steady-state frequency domain models. Moreover, there is often a need to consider vehicle/track interaction in more detail than is common in frequency domain models, such as the 2.5D approach, while maintaining the computational efficiency of the latter. However, full time-domain approaches involve large computation times, particularly where three-dimensional ground models are required. Here, a hybrid modelling approach is introduced. The vehicle/track interaction is calculated in the time domain in order to account directly for effects such as the discrete sleeper spacing. Forces acting on the ground are extracted from this first model and used in a second model to predict the ground response at arbitrary locations. In the present case the second model is a layered ground model operating in the frequency domain. Validation of the approach is provided by comparison with an existing frequency domain model. The hybrid model is then used to study the sleeper-passing effect, which is shown to be less significant than excitation due to track unevenness in all the cases considered.
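
The hand-off between the two stages can be sketched compactly: a force history computed in the time domain is Fourier transformed, multiplied by a ground frequency-response function, and transformed back. The Python fragment below uses an invented one-pole low-pass "ground mobility" purely for illustration; the paper's second stage is a layered-ground model.

```python
import numpy as np

fs = 1000.0                                # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)

# Stage 1 (time domain): wheel/rail force with a sleeper-passing component,
# here an invented 25 Hz parametric excitation plus broadband roughness forcing.
force = 1e3 * np.sin(2 * np.pi * 25.0 * t) + 200.0 * rng.normal(size=t.size)

# Stage 2 (frequency domain): apply a ground frequency-response function H(f)
# and transform back to obtain the response at the receiver.
f = np.fft.rfftfreq(t.size, 1.0 / fs)
H = 1e-6 / (1.0 + 1j * f / 30.0)           # invented low-pass ground mobility
response = np.fft.irfft(np.fft.rfft(force) * H, n=t.size)

print(response.std())                      # ground vibration level at the receiver
```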

  15. Courses of action for effects based operations using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Haider, Sajjad; Levis, Alexander H.

    2006-05-01

    This paper presents an Evolutionary Algorithms (EAs) based approach to identify effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specifying the cause-and-effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error-based techniques, which are not only labor intensive but also produce sub-optimal results and cannot model constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are close to one another in terms of achieving the desired effect, the purpose being to give a decision maker several alternatives. Moreover, the alternative COAs can be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.

  16. Modelling the mating system of polar bears: a mechanistic approach to the Allee effect.

    PubMed

    Molnár, Péter K; Derocher, Andrew E; Lewis, Mark A; Taylor, Mitchell K

    2008-01-22

    Allee effects may render exploited animal populations extinction prone, but empirical data are often lacking to describe the circumstances leading to an Allee effect. Arbitrary assumptions regarding Allee effects could lead to erroneous management decisions so that predictive modelling approaches are needed that identify the circumstances leading to an Allee effect before such a scenario occurs. We present a predictive approach of Allee effects for polar bears where low population densities, an unpredictable habitat and harvest-depleted male populations result in infrequent mating encounters. We develop a mechanistic model for the polar bear mating system that predicts the proportion of fertilized females at the end of the mating season given population density and operational sex ratio. The model is parametrized using pairing data from Lancaster Sound, Canada, and describes the observed pairing dynamics well. Female mating success is shown to be a nonlinear function of the operational sex ratio, so that a sudden and rapid reproductive collapse could occur if males are severely depleted. The operational sex ratio where an Allee effect is expected is dependent on population density. We focus on the prediction of Allee effects in polar bears but our approach is also applicable to other species.

  17. Introducing a novel interaction model structure for the combined effect of temperature and pH on the microbial growth rate.

    PubMed

    Akkermans, Simen; Noriega Fernandez, Estefanía; Logist, Filip; Van Impe, Jan F

    2017-01-02

    Efficient modelling of the microbial growth rate can be performed by combining the effects of individual conditions in a multiplicative way, known as the gamma concept. However, several studies have illustrated that interactions between different effects should be taken into account at stressing environmental conditions to achieve a more accurate description of the growth rate. In this research, a novel approach for modeling the interactions between the effects of environmental conditions on the microbial growth rate is introduced. As a case study, the effect of temperature and pH on the growth rate of Escherichia coli K12 is modeled, based on a set of computer controlled bioreactor experiments performed under static environmental conditions. The models compared in this case study are the gamma model, the model of Augustin and Carlier (2000), the model of Le Marc et al. (2002) and the novel multiplicative interaction model, developed in this paper. This novel model enables the separate identification of interactions between the effects of two (or more) environmental conditions. The comparison of these models focuses on the accuracy, interpretability and compatibility with efficient modeling approaches. Moreover, for the separate effects of temperature and pH, new cardinal parameter model structures are proposed. The novel interaction model contributes to a generic modeling approach, resulting in predictive models that are (i) accurate, (ii) easily identifiable with a limited work load, (iii) modular, and (iv) biologically interpretable. Copyright © 2016. Published by Elsevier B.V.
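
For reference, the gamma concept itself (without the interaction term this paper adds) multiplies dimensionless inhibition factors, one per environmental condition. The sketch below uses standard Rosso-type cardinal models for temperature and pH, with invented cardinal parameter values rather than the paper's estimates for E. coli K12.

```python
import numpy as np

# Gamma concept: mu = mu_opt * gamma_T(T) * gamma_pH(pH), each gamma in [0, 1].
def gamma_T(T, Tmin=5.0, Topt=37.0, Tmax=45.0):
    """Rosso cardinal temperature model (parameter values invented)."""
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return np.clip(num / den, 0.0, 1.0)

def gamma_pH(pH, pHmin=4.0, pHopt=7.0, pHmax=9.5):
    """Rosso cardinal pH model (parameter values invented)."""
    num = (pH - pHmin) * (pH - pHmax)
    den = num - (pH - pHopt) ** 2
    return np.clip(num / den, 0.0, 1.0)

mu_opt = 2.3   # growth rate at optimal conditions, 1/h (invented)
for T, pH in [(37, 7.0), (20, 7.0), (20, 5.0)]:
    mu = mu_opt * gamma_T(T) * gamma_pH(pH)
    print(T, pH, round(float(mu), 3))
```

The paper's contribution is an extra multiplicative interaction factor that corrects this purely multiplicative form under combined stresses; that term is omitted in this minimal sketch.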

  18. Modelling approaches for pipe inclination effect on deposition limit velocity of settling slurry flow

    NASA Astrophysics Data System (ADS)

    Matoušek, Václav; Kesely, Mikoláš; Vlasák, Pavel

    2018-06-01

    The deposition velocity is an important operational parameter in the hydraulic transport of solid particles in pipelines. It represents the flow velocity at which transported particles start to settle out at the bottom of the pipe and are no longer transported. A number of predictive models have been developed to determine this threshold velocity for slurry flows of different solids fractions (fractions of different grain size and density). Most of the models consider flow in a horizontal pipe only; modelling approaches for inclined flows are extremely scarce, due partially to a lack of experimental information about the effect of pipe inclination on the slurry flow pattern and behaviour. We survey different approaches to modelling particle deposition in flowing slurry and discuss the mechanisms on which deposition-limit models are based. Furthermore, we analyse possibilities for incorporating the effect of flow inclination into the predictive models, and select the most appropriate ones based on their ability to adapt the modelled deposition mechanisms to conditions associated with flow inclination. The usefulness of the selected modelling approaches and their modifications is demonstrated by comparing model predictions with experimental results for inclined slurry flows from our own laboratory and from the literature.

  19. The stochastic system approach for estimating dynamic treatments effect.

    PubMed

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows biological knowledge to be incorporated naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  20. An integrated model of learning.

    PubMed

    Trigg, A M; Cordova, F D

    1987-01-01

    Worldwide, most educational systems are based on three levels of education that utilize the pedagogical approaches to learning. In the 1960s, scholars formulated another approach to education that has become known as andragogy and has been applied to adult education. Several innovative scholars have seen how andragogy can be applied to teaching children. As a result, both andragogy and pedagogy are viewed as the opposite ends of the educational spectrum. Both of these approaches have a place and function within the modern educational framework. If one assumes that the goal of education is for the acquisition and application of knowledge, then both of these approaches can be used effectively for the attainment of that goal. In order to utilize these approaches effectively, an integrated model of learning has been developed that consists of initial teaching and exploratory learning phases. This model has both the directive and flexible qualities found in the theories of pedagogy and andragogy. With careful consideration and analysis this educational model can be utilized effectively within most educational systems.

  1. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches are often encountered in biomedical and epidemiological studies for mixed outcomes, such as a competing-risk time-to-event outcome and a count outcome, in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, with multiple possible terminations such as discharge, transfer, death, and patients who have not completed the event of interest by the end of follow-up (censored). Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between the different outcomes of LOS and the platelet count of dengue patients with a district cluster effect. Two key approaches have been applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored, under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  2. The comparative cost-effectiveness of an equity-focused approach to child survival, health, and nutrition: a modelling approach.

    PubMed

    Carrera, Carlos; Azrack, Adeline; Begkoyian, Genevieve; Pfaffmann, Jerome; Ribaira, Eric; O'Connell, Thomas; Doughty, Patricia; Aung, Kyaw Myint; Prieto, Lorena; Rasanathan, Kumanan; Sharkey, Alyssa; Chopra, Mickey; Knippenberg, Rudolf

    2012-10-13

    Progress on child mortality and undernutrition has seen widening inequities and a concentration of child deaths and undernutrition in the most deprived communities, threatening the achievement of the Millennium Development Goals. Conversely, a series of recent process and technological innovations have provided effective and efficient options to reach the most deprived populations. These trends raise the possibility that the perceived trade-off between equity and efficiency no longer applies for child health--that prioritising services for the poorest and most marginalised is now more effective and cost effective than mainstream approaches. We tested this hypothesis with a mathematical-modelling approach by comparing the cost-effectiveness in terms of child deaths and stunting events averted between two approaches (from 2011-15 in 14 countries and one province): an equity-focused approach that prioritises the most deprived communities, and a mainstream approach that is representative of current strategies. We combined some existing models, notably the Marginal Budgeting for Bottlenecks Toolkit and the Lives Saved Tool, to do our analysis. We showed that, with the same level of investment, disproportionately higher effects are possible by prioritising the poorest and most marginalised populations, for averting both child mortality and stunting. Our results suggest that an equity-focused approach could result in sharper decreases in child mortality and stunting and higher cost-effectiveness than mainstream approaches, while reducing inequities in effective intervention coverage, health outcomes, and out-of-pocket spending between the most and least deprived groups and geographic areas within countries. Our findings should be interpreted with caution due to uncertainties around some of the model parameters and baseline data. Further research is needed to address some of these gaps in the evidence base. Strategies for improving child nutrition and survival, however, should account for an increasing prioritisation of the most deprived communities and the increased use of community-based interventions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Online Synchronous vs. Asynchronous Software Training through the Behavioral Modeling Approach: A Longitudinal Field Experiment

    ERIC Educational Resources Information Center

    Chen, Charlie C.; Shaw, Ruey-shiang

    2006-01-01

    The continued and increasing use of online training raises the question of whether the most effective training methods applied in live instruction will carry over to different online environments in the long run. Behavior Modeling (BM) approach--teaching through demonstration--has been proven as the most effective approach in a face-to-face (F2F)…

  4. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and the advantages of the first two methods are considered. (Author/GK)

  5. A Group Decision Approach to Developing Concept-Effect Models for Diagnosing Student Learning Problems in Mathematics

    ERIC Educational Resources Information Center

    Hwang, Gwo-Jen; Panjaburee, Patcharin; Triampo, Wannapong; Shih, Bo-Ying

    2013-01-01

    Diagnosing student learning barriers has been recognized as the most fundamental and important issue for improving the learning achievements of students. In the past decade, several learning diagnosis approaches have been proposed based on the concept-effect relationship (CER) model. However, past studies have shown that the effectiveness of this…

  6. Formulation of consumables management models. Development approach for the mission planning processor working model

    NASA Technical Reports Server (NTRS)

    Connelly, L. C.

    1977-01-01

    The mission planning processor is a user-oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA-approved software development standards. This development approach: (1) promotes cost-effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.

  7. An integrated modeling approach for estimating the water quality benefits of conservation practices at the river basin scale

    USDA-ARS?s Scientific Manuscript database

    The USDA initiated the Conservation Effects Assessment Project (CEAP) to quantify the environmental benefits of conservation practices at regional and national scales. For this assessment, a sampling and modeling approach is used. This paper provides a technical overview of the modeling approach use...

  8. An alternative approach for modeling strength differential effect in sheet metals with symmetric yield functions

    NASA Astrophysics Data System (ADS)

    Kurukuri, Srihari; Worswick, Michael J.

    2013-12-01

    An alternative approach is proposed to utilize symmetric yield functions for modeling the tension-compression asymmetry commonly observed in hcp materials. In this work, the strength differential (SD) effect is modeled by choosing separate symmetric plane stress yield functions (for example, Barlat Yld 2000-2d) for tension (i.e., the first quadrant of principal stress space) and for compression (i.e., the third quadrant of principal stress space). In the second and fourth quadrants, the yield locus is constructed by adopting interpolating functions between the uniaxial tensile and compressive stress states. Different interpolating functions are chosen and the predictive capability of each approach is discussed. The main advantage of the proposed approach is that the yield locus parameters are deterministic and relatively easy to identify when compared to the Cazacu family of yield functions commonly used for modeling the SD effect observed in hcp materials.
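
    A geometric toy illustrating the quadrant-wise construction described above: separate tensile and compressive yield radii are joined by a smooth interpolating function in the mixed quadrants. The circular locus shape, the cosine interpolant, and both yield stress values are illustrative assumptions standing in for the Barlat Yld 2000-2d surfaces used in the paper.

        import numpy as np

        sig_t, sig_c = 1.0, 1.3   # tensile and compressive yield stresses (assumed)

        def yield_radius(theta):
            """Radius of a toy yield locus in principal stress space at angle theta."""
            theta = np.mod(theta, 2.0 * np.pi)
            if theta <= np.pi / 2:                 # quadrant 1: tension-tension
                return sig_t
            if np.pi <= theta <= 1.5 * np.pi:      # quadrant 3: compression-compression
                return sig_c
            if theta < np.pi:                      # quadrant 2: interpolate t -> c
                s = (theta - np.pi / 2) / (np.pi / 2)
                start, end = sig_t, sig_c
            else:                                  # quadrant 4: interpolate c -> t
                s = (theta - 1.5 * np.pi) / (np.pi / 2)
                start, end = sig_c, sig_t
            w = 0.5 * (1.0 - np.cos(np.pi * s))    # smooth cosine interpolation
            return start + w * (end - start)

        # Sample the locus, e.g. for plotting or calibrating the mixed quadrants.
        locus = [(yield_radius(th) * np.cos(th), yield_radius(th) * np.sin(th))
                 for th in np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)]
        print(locus[:3])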

  9. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Electromagnetic Pulse (EMP) Survivability of Telecommunications Assets

    DTIC Science & Technology

    1987-02-06

    [Extraction residue from the report's front matter: a list of figures including "EMP Mitigation Program Approach", "Approach to the Assessment of EMP Effects on Networks", "Zone-Boundary Model", and "Zone-Boundary Model and Communications Network Elements", followed by a fragment noting that measured data and theoretical analyses were used to arrive at the conclusions presented in the report (TM: trademark of Western Electric Co., Inc.).]

  11. Linkage of exposure and effects using genomics, proteomics and metabolomics in small fish models (presentation)

    EPA Science Inventory

    This research project combines the use of whole organism endpoints, genomic, proteomic and metabolomic approaches, and computational modeling in a systems biology approach to 1) identify molecular indicators of exposure and biomarkers of effect to EDCs representing several modes/...

  12. Effect of primary and secondary parameters on analytical estimation of effective thermal conductivity of two phase materials using unit cell approach

    NASA Astrophysics Data System (ADS)

    S, Chidambara Raja; P, Karthikeyan; Kumaraswamidhas, L. A.; M, Ramu

    2018-05-01

    Most thermal design systems involve two phase materials, and the analysis of such systems requires a detailed understanding of the thermal characteristics of the two phase material. This article develops a geometry-dependent unit cell approach model by considering the effects of all primary parameters (conductivity ratio and concentration) and secondary parameters (geometry, contact resistance, natural convection, Knudsen and radiation) for the estimation of the effective thermal conductivity of two-phase materials. The analytical equations have been formulated based on an isotherm approach for 2-D and 3-D spatially periodic media. The developed models are validated against standard models and are suited to all kinds of operating conditions. The results show substantial improvement over existing models and are in good agreement with the experimental data.
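
    How the abstract's primary parameters (conductivity ratio and concentration) enter such an estimate can be seen in the classical Maxwell relation for dilute spherical inclusions, sketched below as a baseline. This is a textbook formula, not the paper's geometry-dependent unit cell model, and the material values are invented for illustration.

        def maxwell_keff(k_cont, k_disp, v_disp):
            """Effective conductivity of spheres (k_disp) dispersed in a
            continuous phase (k_cont) at volume fraction v_disp (Maxwell model)."""
            num = k_disp + 2.0 * k_cont + 2.0 * v_disp * (k_disp - k_cont)
            den = k_disp + 2.0 * k_cont - v_disp * (k_disp - k_cont)
            return k_cont * num / den

        # Air pores (0.026 W/m-K) in a ceramic matrix (2.5 W/m-K), 30% porosity:
        print(maxwell_keff(2.5, 0.026, 0.30))   # ~1.5 W/m-K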

  13. Estimation of indirect effect when the mediator is a censored variable.

    PubMed

    Wang, Jian; Shete, Sanjay

    2017-01-01

    A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b] and of X on Y given mediator M [c']) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.
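
    A minimal sketch of the product-of-coefficients computation behind such a mediation model, with the censored mediator handled by multiple imputation. The normal linear mediator model, the imputation count, and all simulated values are simplifying assumptions; the paper's strategy uses an accelerated failure time model, which this normal-theory version only approximates.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import truncnorm

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=n)
        M_true = 0.5 * X + rng.normal(size=n)             # path a = 0.5
        Y = 0.3 * X + 0.4 * M_true + rng.normal(size=n)   # paths c' = 0.3, b = 0.4
        c = np.quantile(M_true, 0.7)
        observed = M_true < c                             # M right-censored at c

        # Mediator model fitted to the uncensored cases (sketch only).
        fit_m = sm.OLS(M_true[observed], sm.add_constant(X[observed])).fit()
        mu = fit_m.params[0] + fit_m.params[1] * X[~observed]
        sd = np.sqrt(fit_m.scale)

        indirect = []
        for _ in range(20):                               # 20 imputations
            M = M_true.copy()
            # Draw the censored mediators from the normal tail above c.
            M[~observed] = truncnorm.rvs((c - mu) / sd, np.inf, loc=mu,
                                         scale=sd, random_state=rng)
            a = sm.OLS(M, sm.add_constant(X)).fit().params[1]
            b = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit().params[2]
            indirect.append(a * b)

        print("pooled indirect effect a*b:", np.mean(indirect))   # ~0.5 * 0.4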

  14. Separating direct and indirect effects of global change: a population dynamic modeling approach using readily available field data.

    PubMed

    Farrer, Emily C; Ashton, Isabel W; Knape, Jonas; Suding, Katharine N

    2014-04-01

    Two sources of complexity make predicting plant community response to global change particularly challenging. First, realistic global change scenarios involve multiple drivers of environmental change that can interact with one another to produce non-additive effects. Second, in addition to these direct effects, global change drivers can indirectly affect plants by modifying species interactions. In order to tackle both of these challenges, we propose a novel population modeling approach, requiring only measurements of abundance and climate over time. To demonstrate the applicability of this approach, we model population dynamics of eight abundant plant species in a multifactorial global change experiment in alpine tundra where we manipulated nitrogen, precipitation, and temperature over 7 years. We test whether indirect and interactive effects are important to population dynamics and whether explicitly incorporating species interactions can change predictions when models are forecast under future climate change scenarios. For three of the eight species, population dynamics were best explained by direct effect models, for one species neither direct nor indirect effects were important, and for the other four species indirect effects mattered. Overall, global change had negative effects on species population growth, although species responded to different global change drivers, and single-factor effects were slightly more common than interactive direct effects. When the fitted population dynamic models were extrapolated under changing climatic conditions to the end of the century, forecasts of community dynamics and diversity loss were largely similar using direct effect models that do not explicitly incorporate species interactions or best-fit models; however, inclusion of species interactions was important in refining the predictions for two of the species. The modeling approach proposed here is a powerful way of analyzing readily available datasets which should be added to our toolbox to tease apart complex drivers of global change. © 2013 John Wiley & Sons Ltd.

  15. An effective medium approach to modelling the pressure-dependent electrical properties of porous rocks

    NASA Astrophysics Data System (ADS)

    Han, Tongcheng

    2018-07-01

    Understanding the electrical properties of rocks under varying pressure is important for a variety of geophysical applications. This study proposes an approach to modelling the pressure-dependent electrical properties of porous rocks based on an effective medium model. The so-called Textural model uses the aspect ratios and pressure-dependent volume fractions of the pores and the aspect ratio and electrical conductivity of the matrix grains. The pores were represented by randomly oriented stiff and compliant spheroidal shapes with constant aspect ratios, and their pressure-dependent volume fractions were inverted from the measured variation of total porosity with differential pressure using a dual porosity model. The unknown constant stiff and compliant pore aspect ratios and the aspect ratio and electrical conductivity of the matrix grains were inverted by best fitting the modelled electrical formation factor to the measured data. Application of the approach to three sandstone samples covering a broad porosity range showed that the pressure-dependent electrical properties can be satisfactorily modelled by the proposed approach. The results demonstrate that the dual porosity concept is sufficient to explain the electrical properties of porous rocks under pressure through the effective medium model scheme.

  16. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
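
    The second step described above amounts to a simple correction: dividing model-predicted counts by the estimated detection probability and the effectively sampled area. A toy sketch with invented numbers, not the study's fitted values:

        import numpy as np

        predicted_counts = np.array([3.2, 5.1, 0.8])   # birds per survey point
        p_detect = 0.62                                # estimated detection probability
        area_sampled_ha = 0.9                          # effective area per point, ha

        # Adjusted density accounts for birds present but not detected.
        density = predicted_counts / (p_detect * area_sampled_ha)   # birds per ha
        print(density)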

  17. A quantum wave based compact modeling approach for the current in ultra-short DG MOSFETs suitable for rapid multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander

    2017-11-01

    Source-to-drain (SD) tunneling decreases the device performance in MOSFETs falling below the 10 nm channel length. Modeling quantum mechanical effects, including SD tunneling, has gained importance, especially for compact model developers. The non-equilibrium Green's function (NEGF) method has become a state-of-the-art method for nano-scaled device simulation in the past years. In the sense of a multi-scale simulation approach, it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models, which consider quantum effects of nano-scaled devices. In this work, an NEGF based analytical model for nano-scaled double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution of a classical compact model and a 1D NEGF formalism for calculating the device current, taking into account quantum mechanical effects. The potential calculation omits the iterative coupling and allows the straightforward current calculation. The model is based on a ballistic NEGF approach whereby backscattering effects are considered as a second order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model is inspected in comparison with numerical NanoMOS TCAD data for various channel lengths. With the help of this model, investigations of short-channel and temperature effects are performed.

  18. DEVELOPMENT AND APPLICATION OF POPULATION MODELS TO SUPPORT EPA'S ECOLOGICAL RISK ASSESSMENT PROCESSES FOR PESTICIDES

    EPA Science Inventory

    As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...

  19. Interviewer effects on non-response propensity in longitudinal surveys: a multilevel modelling approach

    PubMed Central

    Vassallo, Rebecca; Durrant, Gabriele B; Smith, Peter W F; Goldstein, Harvey

    2015-01-01

    The paper investigates two different multilevel approaches, the multilevel cross-classified and the multiple-membership models, for the analysis of interviewer effects on wave non-response in longitudinal surveys. The models proposed incorporate both interviewer and area effects to account for the non-hierarchical structure, the influence of potentially more than one interviewer across waves and possible confounding of area and interviewer effects arising from the non-random allocation of interviewers across areas. The methods are compared by using a data set: the UK Family and Children Survey. PMID:25598587

  20. A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects

    PubMed Central

    Sun, Bo; Li, Yu; Ye, Tianyuan

    2015-01-01

    Environmental effects are not considered sufficiently in product design, and reliability problems caused by environmental effects are very prominent. This paper proposes a method for applying an ontology approach in product design, so that knowledge of environmental effects can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe environmental effects domain knowledge. Related concepts of environmental effects are formally defined using the ontology approach. This model can be applied to organize environmental effects knowledge for different environments. Finally, rubber seals used in a subhumid acid rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis. PMID:25821857

  2. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    PubMed

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting a marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as the marginal structural Cox model in addressing time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
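
    The device by which a marginal structural model "appropriately adjusts" for time-dependent confounding is inverse-probability-of-treatment weighting. A minimal sketch in person-period format; the column names (id, treat, treat_lag, L) and the two logistic models are illustrative assumptions, not the paper's specification.

        import numpy as np
        import statsmodels.formula.api as smf

        def stabilized_weights(df):
            """df: person-period rows with columns id, treat, treat_lag, L."""
            # Denominator: P(treatment | past treatment and covariate history L).
            denom = smf.logit("treat ~ treat_lag + L", data=df).fit(disp=0)
            # Numerator: P(treatment | past treatment only).
            numer = smf.logit("treat ~ treat_lag", data=df).fit(disp=0)
            p_d = np.where(df["treat"] == 1, denom.predict(df), 1 - denom.predict(df))
            p_n = np.where(df["treat"] == 1, numer.predict(df), 1 - numer.predict(df))
            out = df.assign(ratio=p_n / p_d)
            # Stabilized weight: cumulative product of the ratios within subject.
            out["sw"] = out.groupby("id")["ratio"].cumprod()
            return out

        # The outcome model (e.g. a weighted Cox or pooled logistic regression)
        # would then be fitted with sw as case weights.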

  3. Determination of effective electromagnetic parameters of concentrated suspensions of ellipsoidal particles using Generalized Differential Effective Medium approximation

    NASA Astrophysics Data System (ADS)

    Markov, M.; Levin, V.; Markova, I.

    2018-02-01

    The paper presents an approach to determine the effective electromagnetic parameters of suspensions of ellipsoidal dielectric particles with surface conductivity. This approach takes into account the existence of critical porosity that corresponds to the maximum packing volume fraction of solid inclusions. The approach is based on the Generalized Differential Effective Medium (GDEM) method. We have introduced a model of suspensions containing ellipsoidal inclusions of two types. Inclusions of the first type (phase 1) represent solid grains, and inclusions of the second type (phase 2) contain material with the same physical properties as the host (phase 0). In this model, with increasing porosity the concentration of the host decreases, and it tends to zero near the critical porosity. The proposed model has been used to simulate the effective electromagnetic parameters of concentrated suspensions. We have compared the modeling results for electrical conductivity and dielectric permittivity with the empirical equations. The results obtained have shown that the GDEM model describes the effective electrical conductivity and dielectric permittivity of suspensions in a wide range of inclusion concentrations.

  4. A predictive modeling approach to increasing the economic effectiveness of disease management programs.

    PubMed

    Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian

    2014-09-01

    Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation and medical management. This article illustrates a PM approach that enables the economic potential of (cost-)effective disease management programs (DMPs) to be fully exploited by optimized candidate selection, as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy for health insurance companies to apply. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate in this setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
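
    A minimal sketch of GLM-based candidate selection of the kind the abstract describes: a gamma regression with log link ranks members by predicted future cost. The covariates, the simulated portfolio, and the enrollment cutoff are invented for illustration, not the paper's model.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 2000
        age = rng.integers(30, 85, n).astype(float)
        prior_cost = rng.gamma(2.0, 1000.0, n)
        X = sm.add_constant(np.column_stack([age, prior_cost]))
        future_cost = rng.gamma(2.0, 500.0, n) + 0.8 * prior_cost + 20.0 * age

        # Gamma GLM with log link, a common choice for right-skewed cost data.
        glm = sm.GLM(future_cost, X,
                     family=sm.families.Gamma(sm.families.links.Log()))
        fit = glm.fit()

        # Enroll the members with the highest predicted future cost first.
        pred = fit.predict(X)
        top_candidates = np.argsort(pred)[::-1][:100]
        print(top_candidates[:10])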

  5. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
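
    The distribution-function idea above can be made concrete in a few lines: within a daily step, rainfall intensity is described by a cdf rather than by its daily mean, and infiltration excess is integrated over that distribution. The exponential intensity distribution, the rain duration, and the constant infiltration capacity below are illustrative assumptions, not the paper's calibrated forms.

        import numpy as np
        from scipy import integrate, stats

        daily_rain = 40.0    # mm/day total
        rain_hours = 8.0     # hours of rain in the day (assumed)
        mean_i = daily_rain / rain_hours   # mm/h mean intensity while raining
        f_cap = 6.0          # infiltration capacity, mm/h (assumed)

        intensity = stats.expon(scale=mean_i)

        # Expected infiltration-excess rate, E[max(i - f_cap, 0)], over the cdf.
        excess_rate, _ = integrate.quad(
            lambda i: (i - f_cap) * intensity.pdf(i), f_cap, np.inf)
        runoff_cdf = excess_rate * rain_hours

        # A daily-average model sees only the mean intensity everywhere:
        runoff_daily = max(mean_i - f_cap, 0.0) * rain_hours

        print(runoff_cdf, runoff_daily)   # the cdf version produces runoff even
                                          # though the daily mean is below f_cap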

  6. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling

    PubMed Central

    Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.

    2016-01-01

    Modeling of clinical effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
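
    The partitioned-survival bookkeeping contrasted above is simple to state in code: mean time in the progression state is derived as the area between the overall survival (OS) and progression-free survival (PFS) curves rather than modeled as a state. The exponential curves and rates below are placeholders, not the trial's fitted distributions.

        import numpy as np

        t = np.linspace(0.0, 15.0, 1500)          # years, including extrapolation
        os_curve = np.exp(-0.10 * t)              # assumed exponential OS
        pfs_curve = np.exp(-0.25 * t)             # assumed exponential PFS

        mean_os = np.trapz(os_curve, t)           # life-years
        mean_pfs = np.trapz(pfs_curve, t)         # progression-free life-years
        time_in_progression = mean_os - mean_pfs  # derived, not modeled directly

        print(mean_os, mean_pfs, time_in_progression)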

  8. An Integrated Approach to Damage Accommodation in Flight Control

    NASA Technical Reports Server (NTRS)

    Boskovic, Jovan D.; Knoebel, Nathan; Mehra, Raman K.; Gregory, Irene

    2008-01-01

    In this paper we present an integrated approach to in-flight damage accommodation in flight control. The approach is based on Multiple Models, Switching and Tuning (MMST) and consists of three steps. In the first step the main objective is to acquire a realistic aircraft damage model. Modeling of in-flight damage is a highly complex problem since there is a large number of issues that need to be addressed. One of the most important is the strong coupling between structural dynamics, aerodynamics, and flight control; these effects cannot be studied separately because of this coupling. Once a realistic damage model is available, in the second step a large number of models corresponding to different damage cases are generated. One possibility is to generate many linear models and interpolate between them to cover a large portion of the flight envelope. Once these models have been generated, we implement a recently developed Model Set Reduction (MSR) technique. The technique is based on parameterizing damage in terms of uncertain parameters, and uses concepts from robust control theory to arrive at a small number of "centered" models such that the controllers corresponding to these models assure desired stability and robustness properties over a subset of the parametric space. By devising a suitable model placement strategy, the entire parametric set is covered with a relatively small number of models and controllers. The third step consists of designing a Multiple Models, Switching and Tuning (MMST) strategy for estimating the current operating regime (damage case) of the aircraft and switching to the corresponding controller to achieve effective damage accommodation and the desired performance. In the paper we present a comprehensive approach to damage accommodation using Model Set Design, MMST, and Variable Structure compensation for coupling nonlinearities. The approach was evaluated on a model of F/A-18 aircraft dynamics under control effector damage, augmented by nonlinear cross-coupling terms and a structural dynamics model, and achieved excellent performance under severe damage effects.

  9. Turbulence Modeling Effects on the Prediction of Equilibrium States of Buoyant Shear Flows

    NASA Technical Reports Server (NTRS)

    Zhao, C. Y.; So, R. M. C.; Gatski, T. B.

    2001-01-01

    The effects of turbulence modeling on the prediction of equilibrium states of turbulent buoyant shear flows were investigated. The velocity field models used include a two-equation closure and a Reynolds-stress closure assuming two different pressure-strain models and three different dissipation rate tensor models. As for the thermal field closure models, two different pressure-scrambling models and nine different temperature variance dissipation rate (Epsilon(0)) equations were considered. The emphasis of this paper is on the effects of the Epsilon(0)-equation, of the dissipation rate models, of the pressure-strain models and of the pressure-scrambling models on the prediction of the approach to equilibrium turbulence. Equilibrium turbulence is defined by the time rate of change of the scaled Reynolds stress anisotropy tensor and heat flux vector becoming zero. These conditions lead to the equilibrium state parameters. Calculations show that the Epsilon(0)-equation has a significant effect on the prediction of the approach to equilibrium turbulence. For a particular Epsilon(0)-equation, all velocity closure models considered give an equilibrium state if anisotropic dissipation is accounted for in one form or another in the dissipation rate tensor or in the Epsilon(0)-equation. It is further found that the models considered for the pressure-strain tensor and the pressure-scrambling vector have little or no effect on the prediction of the approach to equilibrium turbulence.

  10. Bayesian inference for psychology, part IV: parameter estimation and Bayes factors.

    PubMed

    Rouder, Jeffrey N; Haaf, Julia M; Vandekerckhove, Joachim

    2018-02-01

    In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
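
    The link between the Bayes factor and the effect estimate under a spike-and-slab prior can be shown for a normal mean with known variance; the N(0, tau^2) slab, the 50/50 prior odds, and the data values below are simplifying assumptions used only to illustrate the unification the abstract describes.

        import numpy as np
        from scipy.stats import norm

        n, sigma, tau = 50, 1.0, 1.0
        xbar = 0.35                      # observed sample mean
        se2 = sigma**2 / n

        # Marginal likelihood of xbar under the spike (point mass at 0) and slab.
        m_spike = norm.pdf(xbar, 0.0, np.sqrt(se2))
        m_slab = norm.pdf(xbar, 0.0, np.sqrt(tau**2 + se2))
        bf01 = m_spike / m_slab          # Bayes factor in favor of the null

        # With 50/50 prior odds, posterior probability of the slab component:
        p_slab = 1.0 / (1.0 + bf01)

        # Conditional posterior mean under the slab (conjugate shrinkage), then
        # the model-averaged estimate, further shrunk by the spike's mass:
        cond_mean = xbar * tau**2 / (tau**2 + se2)
        effect_estimate = p_slab * cond_mean

        print(bf01, p_slab, effect_estimate)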

  11. Intercomparison Of Approaches For Modeling Second Order Ionospheric Corrections Using Gnss Measurements

    NASA Astrophysics Data System (ADS)

    Garcia Fernandez, M.; Butala, M.; Komjathy, A.; Desai, S. D.

    2012-12-01

    Correcting GNSS tracking data for second order ionospheric effects has been shown to cause a southward shift in GNSS-based precise point positioning solutions by as much as 10 mm, depending on solar cycle conditions. The most commonly used approaches for modeling the higher order ionospheric effect include (a) the use of global ionosphere maps (GIMs) to determine vertical total electron content (VTEC) and convert it to slant TEC (STEC) assuming a thin-shell ionosphere, and (b) using the dual-frequency measurements themselves to determine STEC. The latter approach benefits from not requiring ionospheric mapping functions between VTEC and STEC; however, it requires calibration with receiver and transmitter Differential Code Biases (DCBs). We present results from comparisons of the two approaches. For the first approach, we also compare the use of VTEC observations from IONEX maps with climatological model-derived VTEC as provided by the International Reference Ionosphere (IRI2012). We consider various metrics to evaluate the relative performance of the different approaches, including station repeatability, GNSS-based reference frame recovery, and post-fit measurement residuals. Overall, the GIM-based approaches tend to provide lower noise in the second order ionosphere corrections and positioning solutions. The use of IONEX and IRI2012 models of VTEC provides similar results, especially in periods of low solar activity. The use of the IRI2012 model provides a convenient approach for operational scenarios by eliminating the dependence on routine updates of the GIMs, and also serves as a useful source of VTEC when IONEX maps may not be readily available.
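
    A sketch of the measurement-based route (b): the first order slant TEC follows from the geometry-free combination of dual-frequency pseudoranges. The pseudorange values are invented, and real processing must first remove the receiver and satellite DCBs the abstract mentions.

        K = 40.3               # m^3 s^-2 per electron, first order ionospheric constant
        f1, f2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies, Hz

        def slant_tec(p1_m, p2_m):
            """STEC in TEC units (1 TECU = 1e16 el/m^2) from code pseudoranges."""
            stec = (p2_m - p1_m) * f1**2 * f2**2 / (K * (f1**2 - f2**2))
            return stec / 1e16

        # ~3.5 m of differential L2-L1 code delay corresponds to ~33 TECU:
        print(slant_tec(21_000_000.0, 21_000_003.5))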

  12. MODELING APPROACHES TO POPULATION-LEVEL RISK AESSESSMENT

    EPA Science Inventory

    A SETAC Pellston Workshop on Population-Level Risk Assessment was held in Roskilde, Denmark on 23-27 August 2003. One aspect of this workshop focused on modeling approaches for characterizing population-level effects of chemical exposure. The modeling work group identified th...

  13. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus remain underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
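
    The count-data regression setting the abstract targets can be illustrated with a plain Poisson GLM; this is the standard baseline such work improves on, not the paper's max-margin Bayesian model. The software metrics and defect counts below are simulated placeholders.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(2)
        n = 300
        loc = rng.uniform(100, 5000, n)            # lines of code
        cplx = rng.uniform(1, 30, n)               # cyclomatic complexity
        X = np.column_stack([np.log(loc), cplx])

        # Simulated defect counts with a log-linear dependence on the metrics.
        defects = rng.poisson(np.exp(-2.0 + 0.5 * np.log(loc) + 0.05 * cplx))

        model = PoissonRegressor(alpha=1e-4).fit(X, defects)
        print(model.coef_, model.predict(X[:5]))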

  14. Contingency Approaches to Leadership: A Review and Synthesis. Interim Report for Period 1 September 1974-1 June 1975.

    ERIC Educational Resources Information Center

    Hendrix, William H.

    This report focuses on the problem of how to improve leadership effectiveness in order to improve overall organization effectiveness. First, three different approaches to leadership behavior are presented: Trait Approach, Behavioral Approach, and Situational Approach. Next, reviews of the leadership literature and of eight contingency models of…

  15. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Bayesian effect estimation accounting for adjustment uncertainty.

    PubMed

    Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca

    2012-09-01

    Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome, while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC, with ω > 1, estimates the exposure effect with smaller bias than traditional BMA, and improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
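
    The role of the dependence parameter ω can be shown with a toy enumeration of the prior over the two inclusion indicators for a single candidate confounder; this is only the prior bookkeeping implied by the abstract, not the paper's full BAC posterior computation.

        import numpy as np

        def prior_joint(omega):
            """P(delta_exposure, delta_outcome) over the four inclusion patterns."""
            # Unnormalized weights: prior odds of delta_outcome = 1 are omega
            # when the predictor is in the exposure model, and 1 otherwise.
            w = {(0, 0): 1.0, (0, 1): 1.0,
                 (1, 0): 1.0, (1, 1): omega}
            z = sum(w.values())
            return {k: v / z for k, v in w.items()}

        for omega in (1.0, 10.0, np.inf):
            p = prior_joint(omega if np.isfinite(omega) else 1e12)
            p_in = p[(1, 1)] / (p[(1, 0)] + p[(1, 1)])
            print(f"omega={omega}: P(in outcome model | in exposure model)={p_in:.3f}")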

  17. The Development and Evaluation of Speaking Learning Model by Cooperative Approach

    ERIC Educational Resources Information Center

    Darmuki, Agus; Andayani; Nurkamto, Joko; Saddhono, Kundharu

    2018-01-01

    A cooperative approach-based Speaking Learning Model (SLM) has been developed to improve speaking skill of Higher Education students. This research aimed at evaluating the effectiveness of cooperative-based SLM viewed from the development of student's speaking ability and its effectiveness on speaking activity. This mixed method study combined…

  18. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…

  19. Testing Two Path Models to Explore Relationships between Students' Experiences of the Teaching-Learning Environment, Approaches to Learning and Academic Achievement

    ERIC Educational Resources Information Center

    Karagiannopoulou, Evangelia; Milienos, Fotios S.

    2015-01-01

    The study explores the relationships between students' experiences of the teaching-learning environment and their approaches to learning, and the effects of these variables on academic achievement. Two three-stage models were tested with structural equation modelling techniques. The "Approaches and Study Skills Inventory for Students"…

  20. Treatment effect heterogeneity for univariate subgroups in clinical trials: Shrinkage, standardization, or else

    PubMed Central

    Varadhan, Ravi; Wang, Sue-Jane

    2016-01-01

    Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the "truth" since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against the optimism present in the naive approach. Due to its simplicity, the naive approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
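
    The shrinkage idea behind the Bayes-DS and Emp-Bayes estimators can be seen in the simplest normal-normal empirical Bayes form, sketched below with invented subgroup estimates and standard errors; the paper's model-based versions are more elaborate.

        import numpy as np

        theta = np.array([0.10, 0.35, 0.22, -0.05])   # naive subgroup estimates
        se = np.array([0.08, 0.10, 0.09, 0.12])       # their standard errors

        overall = np.average(theta, weights=1.0 / se**2)
        # Method-of-moments estimate of the between-subgroup variance tau^2:
        tau2 = max(np.var(theta, ddof=1) - np.mean(se**2), 0.0)

        shrink = se**2 / (se**2 + tau2)               # per-subgroup shrinkage factor
        theta_eb = shrink * overall + (1.0 - shrink) * theta
        print(theta_eb)                                # pulled toward the overall effect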

  1. An Ensemble Approach for Drug Side Effect Prediction

    PubMed Central

    Jahid, Md Jamiul; Ruan, Jianhua

    2014-01-01

    In silico prediction of drug side-effects in the early stages of drug development is becoming more popular nowadays, as it reduces both the time and the cost of drug development. In this article we propose an ensemble approach to predict the side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated by a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future use. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, is able to predict rare side-effects which are often ignored by other approaches. The method described in this article can be useful for predicting side-effects at an early stage of drug design, reducing experimental cost and time. PMID:25327524
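
    A minimal sketch of the similarity-grouped ensemble idea: each base model is trained on a different neighbourhood of structurally similar drugs and their probabilities are averaged. The random fingerprints, the single binary side-effect label, and the shifted-neighbourhood scheme are all illustrative assumptions, not the paper's construction.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics.pairwise import cosine_similarity

        rng = np.random.default_rng(3)
        fp = rng.integers(0, 2, size=(200, 64)).astype(float)   # drug fingerprints
        y = rng.integers(0, 2, size=200)                        # one side-effect label

        sim = cosine_similarity(fp)

        def ensemble_predict(query_idx, k=40, n_models=5):
            """Average the probabilities of models fit on overlapping
            neighbourhoods of the k most similar drugs."""
            order = np.argsort(sim[query_idx])[::-1][1:]        # skip the drug itself
            probs = []
            for m in range(n_models):
                idx = order[m * 10 : m * 10 + k]                # shifted neighbourhood
                clf = LogisticRegression(max_iter=1000).fit(fp[idx], y[idx])
                probs.append(clf.predict_proba(fp[[query_idx]])[0, 1])
            return float(np.mean(probs))

        print(ensemble_predict(0))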

  2. Finite Element Modeling of the Buckling Response of Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Rose, Cheryl A.; Moore, David F.; Knight, Norman F., Jr.; Rankin, Charles C.

    2002-01-01

    A comparative study of different modeling approaches for predicting sandwich panel buckling response is described. The study considers sandwich panels with anisotropic face sheets and a very thick core. Results from conventional analytical solutions for sandwich panel overall buckling and face-sheet-wrinkling type modes are compared with solutions obtained using different finite element modeling approaches. Finite element solutions are obtained using layered shell element models, with and without transverse shear flexibility; layered shell/solid element models, with shell elements for the face sheets and solid elements for the core; and sandwich models using a recently developed specialty sandwich element. Convergence characteristics of the shell/solid and sandwich element modeling approaches with respect to in-plane and through-the-thickness discretization are demonstrated. Results of the study indicate that the specialty sandwich element provides an accurate and effective modeling approach for predicting both overall and localized sandwich panel buckling response. Furthermore, results indicate that anisotropy of the face sheets, along with the ratio of principal elastic moduli, affects the buckling response, and these effects may not be represented accurately by analytical solutions. Modeling recommendations are also provided.

  3. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grade and exposure type were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples' responses, the change in haze (%) depended on individual samples' responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in the case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enables cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
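
    The fixed- vs mixed-effects distinction the abstract draws can be sketched with statsmodels: a random intercept per sample absorbs sample-to-sample offsets that a fixed-effects fit must leave in the residuals. The simulated haze trajectories and model formulas are stand-ins, not the study's calibrated models.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        samples = np.repeat(np.arange(12), 10)          # 12 PET samples, 10 time points
        time = np.tile(np.arange(10), 12) * 500.0       # exposure hours
        sample_fx = rng.normal(0, 0.4, 12)[samples]     # per-sample random offset
        haze = 1.0 + 0.002 * time + sample_fx + rng.normal(0, 0.2, 120)

        df = pd.DataFrame({"haze": haze, "time": time, "sample": samples})

        fixed = smf.ols("haze ~ time", data=df).fit()
        mixed = smf.mixedlm("haze ~ time", data=df, groups=df["sample"]).fit()
        print(fixed.params["time"], mixed.params["time"])
        # The random intercepts soak up the per-sample offsets, which typically
        # tightens the uncertainty on the shared degradation slope.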

  4. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study it was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models applied to real data differed in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (on average a 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data analysed using Rasch techniques typically do not take the SE of the latent trait into account in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimates and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  5. Mechanisms and mediation in survival analysis: towards an integrated analytical framework.

    PubMed

    Pratschke, Jonathan; Haase, Trutz; Comber, Harry; Sharp, Linda; de Camargo Cancela, Marianna; Johnson, Howard

    2016-02-29

    A wide-ranging debate has taken place in recent years on mediation analysis and causal modelling, raising profound theoretical, philosophical and methodological questions. The authors build on the results of these discussions to work towards an integrated approach to the analysis of research questions that situate survival outcomes in relation to complex causal pathways with multiple mediators. The background to this contribution is the increasingly urgent need for policy-relevant research on the nature of inequalities in health and healthcare. The authors begin by summarising debates on causal inference, mediated effects and statistical models, showing that these three strands of research have powerful synergies. They review a range of approaches which seek to extend existing survival models to obtain valid estimates of mediation effects. They then argue for an alternative strategy, which involves integrating survival outcomes within Structural Equation Models via the discrete-time survival model. This approach can provide an integrated framework for studying mediation effects in relation to survival outcomes, an issue of great relevance in applied health research. The authors provide an example of how these techniques can be used to explore whether the social class position of patients has a significant indirect effect on the hazard of death from colon cancer. The results suggest that the indirect effects of social class on survival are substantial and negative (-0.23 overall). In addition to the substantial direct effect of this variable (-0.60), its indirect effects account for more than one quarter of the total effect. The two main pathways for this indirect effect, via emergency admission (-0.12), on the one hand, and hospital caseload, on the other, (-0.10) are of similar size. The discrete-time survival model provides an attractive way of integrating time-to-event data within the field of Structural Equation Modelling. The authors demonstrate the efficacy of this approach in identifying complex causal pathways that mediate the effects of a socio-economic baseline covariate on the hazard of death from colon cancer. The results show that this approach has the potential to shed light on a class of research questions which is of particular relevance in health research.
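
    Computationally, the discrete-time survival building block the authors integrate into Structural Equation Models reduces to a logistic regression on person-period records. A toy sketch of that expansion under invented data; the colon-cancer covariates and mediating pathways of the paper are not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 400
        x = rng.normal(size=n)                         # baseline covariate
        records = []
        for i in range(n):
            for t in range(1, 6):                      # up to 5 discrete periods
                h = 1.0 / (1.0 + np.exp(-(-2.0 + 0.6 * x[i])))   # per-period hazard
                event = rng.random() < h
                records.append({"id": i, "period": t, "x": x[i],
                                "event": int(event)})
                if event:                              # stop expanding after the event
                    break
        pp = pd.DataFrame(records)

        # Discrete-time hazard model: logistic regression on person-period rows,
        # with a separate baseline hazard per period.
        fit = smf.logit("event ~ C(period) + x", data=pp).fit(disp=0)
        print(fit.params["x"])    # recovers the covariate effect on the hazard (~0.6)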

  6. Homogenization limit for a multiband effective mass model in heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morandi, O., E-mail: morandi@ipcms.unistra.fr

We study the homogenization limit of a multiband model that describes the quantum mechanical motion of an electron in a quasi-periodic crystal. In this approach, the distance among the atoms that constitute the material (the lattice parameter) is considered a small quantity. Our model includes the description of materials with variable chemical composition, intergrowth compounds, and heterostructures. We derive the effective multiband evolution system in the framework of the kp approach. We study the well-posedness of the mathematical problem. We compare the effective mass model with the standard kp models for uniform and non-uniform crystals. We show that in the limit of vanishing lattice parameter, the particle density obtained by the effective mass model converges to the exact probability density of the particle.

  7. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

A combined experimental and analytical approach was performed for characterizing and modeling triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0deg/60deg/-60deg] braided composite at angles of 0deg, 30deg, 45deg, 60deg and 90deg relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load and each coupon direction exhibited unique final failures. The subcell modeling approach implemented into the finite element software LS-DYNA was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0deg, 30deg and 60deg loading directions. The model over-predicted the strength in the 90deg direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.

  9. Exploring the Impact of Students' Learning Approach on Collaborative Group Modeling of Blood Circulation

    ERIC Educational Resources Information Center

    Lee, Shinyoung; Kang, Eunhee; Kim, Heui-Baik

    2015-01-01

    This study aimed to explore the effect on group dynamics of statements associated with deep learning approaches (DLA) and their contribution to cognitive collaboration and model development during group modeling of blood circulation. A group was selected for an in-depth analysis of collaborative group modeling. This group constructed a model in a…

  10. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications: the first emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second emphasizes efficiency by taking only the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  11. Molecular basis of LFER. Modeling of the electronic substituent effect using fragment quantum self-similarity measures.

    PubMed

    Gironés, Xavier; Carbó-Dorca, Ramon; Ponec, Robert

    2003-01-01

A new approach allowing the theoretical modeling of the electronic substituent effect is proposed. The approach is based on the use of fragment Quantum Self-Similarity Measures (MQS-SM) calculated from domain-averaged Fermi holes as new theoretical descriptors allowing for the replacement of Hammett sigma constants in QSAR models. To demonstrate the applicability of this new approach, its formalism was applied to the description of the substituent effect on the dissociation of a broad series of meta- and para-substituted benzoic acids. The accuracy and the predictive power of the new approach were tested by comparison with a recent exhaustive study by Sullivan et al. It has been shown that the accuracy and the predictive power of both procedures are comparable but, in contrast to the five-parameter correlation equation necessary to describe the data in that study, our approach is simpler: only a one-parameter correlation equation is required.
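
    For context, the Hammett linear free-energy relation whose sigma constants the fragment similarity measures are intended to replace is the standard one-parameter form (textbook definition, not specific to this article):

```latex
% Hammett equation: substituent effect on the dissociation of benzoic acids.
% K_X, K_H: dissociation constants of the substituted and unsubstituted acid;
% sigma_X: substituent constant; rho: reaction constant.
\log\frac{K_X}{K_H} = \rho\,\sigma_X
```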

  12. Multilevel models for cost-effectiveness analyses that use cluster randomised trial data: An approach to model choice.

    PubMed

    Ng, Edmond S-W; Diaz-Ordaz, Karla; Grieve, Richard; Nixon, Richard M; Thompson, Simon G; Carpenter, James R

    2016-10-01

    Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data. © The Author(s) 2013.
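
    A minimal sketch of the deviance information criterion computation that drives the model choice described above, given posterior draws (here faked with simple normal draws standing in for MCMC output, and a normal likelihood standing in for the Gamma/Inverse Gaussian alternatives):

```python
# DIC from posterior draws: DIC = Dbar + pD, with pD = Dbar - D(theta_bar).
import numpy as np

rng = np.random.default_rng(5)
costs = rng.gamma(shape=2.0, scale=500.0, size=200)   # right-skewed costs

def loglik(mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (costs - mu) ** 2 / (2 * sigma**2))

# Stand-in for MCMC output: posterior draws of (mu, sigma).
mu_draws = rng.normal(costs.mean(), costs.std() / np.sqrt(len(costs)), 2000)
sigma_draws = np.abs(rng.normal(costs.std(), 30.0, 2000))

deviances = np.array([-2 * loglik(m, s)
                      for m, s in zip(mu_draws, sigma_draws)])
d_bar = deviances.mean()                           # posterior mean deviance
d_hat = -2 * loglik(mu_draws.mean(), sigma_draws.mean())  # plug-in deviance
p_d = d_bar - d_hat                                # effective no. parameters
dic = d_bar + p_d
print(f"DIC = {dic:.1f} (pD = {p_d:.1f}); lower DIC is preferred")
```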

  13. Humanistic Speech Education to Create Leadership Models.

    ERIC Educational Resources Information Center

    Oka, Beverley Jeanne

    A theoretical framework based primarily on the humanistic psychology of Abraham Maslow is used in developing a humanistic approach to speech education. The holistic view of human learning and behavior, inherent in this approach, is seen to be compatible with a model of effective leadership. Specific applications of this approach to speech…

  14. Modeling the Relations among Students' Epistemological Beliefs, Motivation, Learning Approach, and Achievement

    ERIC Educational Resources Information Center

    Kizilgunes, Berna; Tekkaya, Ceren; Sungur, Semra

    2009-01-01

    The authors proposed a model to explain how epistemological beliefs, achievement motivation, and learning approach related to achievement. The authors assumed that epistemological beliefs influence achievement indirectly through their effect on achievement motivation and learning approach. Participants were 1,041 6th-grade students. Results of the…

  15. A new modeling and inference approach for the Systolic Blood Pressure Intervention Trial outcomes.

    PubMed

    Yang, Song; Ambrosius, Walter T; Fine, Lawrence J; Bress, Adam P; Cushman, William C; Raj, Dominic S; Rehman, Shakaib; Tamariz, Leonardo

    2018-06-01

Background/aims: In clinical trials with time-to-event outcomes, the significance tests and confidence intervals are usually based on a proportional hazards model, so the temporal pattern of the treatment effect is not directly considered. This could be problematic if the proportional hazards assumption is violated, as such violation could impact both interim and final estimates of the treatment effect. Methods: We describe the application of inference procedures developed recently in the literature for time-to-event outcomes when the treatment effect may or may not be time-dependent. The inference procedures are based on a new model which contains the proportional hazards model as a sub-model. The temporal pattern of the treatment effect can then be expressed and displayed. The average hazard ratio is used as the summary measure of the treatment effect. The test of the null hypothesis uses adaptive weights that often lead to improvement in power over the log-rank test. Results: Without needing to assume proportional hazards, the new approach yields results consistent with previously published findings in the Systolic Blood Pressure Intervention Trial. It provides a visual display of the time course of the treatment effect. At four of the five scheduled interim looks, the new approach yields smaller p values than the log-rank test. The average hazard ratio and its confidence interval indicate a treatment effect nearly a year earlier than a restricted mean survival time-based approach. Conclusion: When the hazards are proportional between the comparison groups, the new methods yield results very close to the traditional approaches. When the proportional hazards assumption is violated, the new methods continue to be applicable and can potentially be more sensitive to departure from the null hypothesis.
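
    As orientation, an average hazard ratio can be written as a weighted average of the time-dependent hazard ratio over the follow-up period (a generic textbook-style form; the specific weights used by the trial's method may differ):

```latex
% h_1, h_0: hazards in the treatment and control groups; w(t): a
% prespecified weight function; tau: end of follow-up.
\mathrm{AHR} = \frac{\int_0^{\tau} \frac{h_1(t)}{h_0(t)}\, w(t)\, dt}
                    {\int_0^{\tau} w(t)\, dt}
```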

  16. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  17. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

This report documents the initial progress on the reduced-order flow model developments in SAM for modeling thermal stratification and mixing. Two different modeling approaches are pursued. The first is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing; a schematic form is sketched below. The second approach is based on a three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.
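
    A schematic form of such a one-dimensional equation with added mixing terms might look as follows (illustrative only; the actual SAM closures are more detailed than this sketch):

```latex
% T: coolant temperature; u: axial velocity; alpha: molecular thermal
% diffusivity; eps_t: effective turbulent diffusivity; S_circ: source term
% representing circulation-driven inter-channel mixing.
\frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x}
  = \frac{\partial}{\partial x}\!\left[\left(\alpha + \epsilon_t\right)
    \frac{\partial T}{\partial x}\right] + S_{\mathrm{circ}}
```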

  18. Inverse approaches with lithologic information for a regional groundwater system in southwest Kansas

    USGS Publications Warehouse

    Tsou, Ming‐shu; Perkins, S.P.; Zhan, X.; Whittemore, Donald O.; Zheng, Lingyun

    2006-01-01

Two practical approaches incorporating lithologic information for groundwater modeling calibration are presented to estimate distributed, cell-based hydraulic conductivity. The first approach is to estimate optimal hydraulic conductivities for geological materials by incorporating the thickness distribution of materials into inverse modeling. In the second approach, residuals for the groundwater model solution are minimized according to a globalized Newton method with the aid of a Geographic Information System (GIS) to calculate a cell-wise distribution of hydraulic conductivity. Both approaches honor geologic data and were effective in characterizing the heterogeneity of a regional groundwater modeling system in southwest Kansas. © 2005 Elsevier Ltd. All rights reserved.

  19. Targeted versus statistical approaches to selecting parameters for modelling sediment provenance

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick

    2017-04-01

One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters, or fingerprints, used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively: their values should remain constant through sediment detachment, transportation and deposition processes, or at the very least vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination tests (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial for researchers to test their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
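
    A minimal sketch of the statistical screening step described above (hypothetical tracer data; real studies loop over dozens of geochemical properties and follow up with discriminant analysis and a mixing model):

```python
# Kruskal-Wallis screening of one candidate tracer across three sources.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)
# Hypothetical concentrations of one tracer in samples from three sources.
surface = rng.normal(10.0, 1.0, size=25)
subsurface = rng.normal(12.5, 1.2, size=25)
channel_bank = rng.normal(10.2, 1.1, size=25)

h_stat, p_value = kruskal(surface, subsurface, channel_bank)
# Retain the tracer only if it discriminates among sources (p < 0.05).
verdict = "retain" if p_value < 0.05 else "drop"
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4g} -> {verdict}")
```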

  20. Introducing an osteopathic approach into neonatology ward: the NE-O model.

    PubMed

    Cerritelli, Francesco; Martelli, Marta; Renzetti, Cinzia; Pizzolorusso, Gianfranco; Cozzolino, Vincenzo; Barlafante, Gina

    2014-01-01

Several studies have shown the effect of osteopathic manipulative treatment on neonatal care in reducing length of stay in hospital, gastrointestinal problems and clubfoot complications, and in improving the cranial asymmetry of infants affected by plagiocephaly. Despite these results, there is still a lack of standardized osteopathic evaluation and treatment procedures for newborns admitted to the neonatal intensive care unit (NICU). The aim of this paper is to propose a protocol for an osteopathic approach (the NE-O model) to treating hospitalized newborns. The NE-O model is composed of specific evaluation tests and treatments that tailor the osteopathic method to preterm and term infants' needs, the NICU environment, and medical and paramedical assistance. This model was developed to maximize the effectiveness and the clinical use of osteopathy in the NICU. The NE-O model was adopted in 2006 to evaluate the efficacy of OMT in neonatology. Results from research showed the effectiveness of this osteopathic model in reducing preterm infants' length of stay and hospital costs. Additionally, the model was demonstrated to be safe. The present paper defines the key steps for a rigorous and effective osteopathic approach in the NICU setting, providing a scientific and methodological example of integrated medicine and complex intervention.

  1. Introducing an osteopathic approach into neonatology ward: the NE-O model

    PubMed Central

    2014-01-01

Background Several studies have shown the effect of osteopathic manipulative treatment on neonatal care in reducing length of stay in hospital, gastrointestinal problems and clubfoot complications, and in improving the cranial asymmetry of infants affected by plagiocephaly. Despite these results, there is still a lack of standardized osteopathic evaluation and treatment procedures for newborns admitted to the neonatal intensive care unit (NICU). The aim of this paper is to propose a protocol for an osteopathic approach (the NE-O model) to treating hospitalized newborns. Methods The NE-O model is composed of specific evaluation tests and treatments that tailor the osteopathic method to preterm and term infants' needs, the NICU environment, and medical and paramedical assistance. This model was developed to maximize the effectiveness and the clinical use of osteopathy in the NICU. Results The NE-O model was adopted in 2006 to evaluate the efficacy of OMT in neonatology. Results from research showed the effectiveness of this osteopathic model in reducing preterm infants' length of stay and hospital costs. Additionally, the model was demonstrated to be safe. Conclusion The present paper defines the key steps for a rigorous and effective osteopathic approach in the NICU setting, providing a scientific and methodological example of integrated medicine and complex intervention. PMID:24904746

  2. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  3. The Be-WetSpa-Pest modeling approach to simulate human and environmental exposure from pesticide application

    NASA Astrophysics Data System (ADS)

    Binder, Claudia; Garcia-Santos, Glenda; Andreoli, Romano; Diaz, Jaime; Feola, Giuseppe; Wittensoeldner, Moritz; Yang, Jing

    2016-04-01

This study presents an integrative and spatially explicit modeling approach for analyzing human and environmental exposure from pesticide application by smallholders in the potato-producing Andean region of Colombia. The modeling approach fulfills the following criteria: (i) it includes environmental and human compartments; (ii) it contains a behavioral decision-making model for estimating the effect of policies on pesticide flows to humans and the environment; (iii) it is spatially explicit; and (iv) it is modular and easily expandable to include additional modules, crops or technologies. The model was calibrated and validated for the Vereda La Hoya and was used to explore the effect of different policy measures in the region. The model has moderate data requirements and can be adapted relatively easily to other regions in developing countries with similar conditions.

  4. Time series sightability modeling of animal populations.

    PubMed

    ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R

    2018-01-01

    Logistic regression models-or "sightability models"-fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
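
    A minimal sketch of the modified Horvitz-Thompson correction described above (hypothetical sightability covariate and coefficients; the plot-level sampling-fraction correction of the full estimator is omitted for brevity):

```python
# mHT abundance estimate: each detected group contributes 1/p of itself
# to correct for groups present but missed.
import numpy as np

def detection_prob(cover, beta0=1.2, beta1=-2.0):
    """Logistic sightability model: P(detect) vs. visual obstruction."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * cover)))

# Detected groups in a detection-only survey (hypothetical data):
group_sizes = np.array([2, 1, 3, 1, 2])       # animals per detected group
cover = np.array([0.1, 0.6, 0.2, 0.8, 0.4])   # obstruction covariate

p_hat = detection_prob(cover)
N_hat = np.sum(group_sizes / p_hat)
print(f"Estimated abundance in surveyed plots: {N_hat:.1f}")
```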

  5. Semiparametric time varying coefficient model for matched case-crossover studies.

    PubMed

    Ortega-Villa, Ana Maria; Kim, Inyoung; Kim, H

    2017-03-15

In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model, because any stratum effect is removed by conditioning on the fixed number of sets of the case and controls in the stratum. Hence, the conditional logistic regression model is not able to detect any effects associated with the matching covariates by stratum. However, some matching covariates such as time often play an important role as an effect modification, leading to incorrect statistical estimation and prediction. Therefore, we propose three approaches to evaluate effect modification by time. The first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. Our parametric approach is a two-stage method, which uses conditional logistic regression in the first stage and then estimates polynomial regression in the second stage. Our semiparametric penalized and Bayesian approaches are one-stage approaches developed by using regression splines. Our semiparametric one-stage approaches allow us not only to detect the parametric relationship between the predictor and binary outcomes, but also to evaluate nonparametric relationships between the predictor and time. We demonstrate the advantage of our semiparametric one-stage approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. We also provide statistical inference for the semiparametric Bayesian approach using Bayes Factors. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Fun with maths: exploring implications of mathematical models for malaria eradication.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A

    2014-12-11

    Mathematical analyses and modelling have an important role informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.

  7. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
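
    A minimal sketch contrasting the two combination strategies discussed above, with illustrative values loosely in the range of reported measurements of the Newtonian constant G (the article's Bayesian treatment replaces these point adjustments with full posterior inference):

```python
# Birge-ratio (location-scale) adjustment vs. DerSimonian-Laird
# random-effects combination of inconsistent measurement results.
import numpy as np

# Illustrative values, in units of 1e-11 m^3 kg^-1 s^-2.
x = np.array([6.67430, 6.67408, 6.67554, 6.67191])  # measured values
u = np.array([0.00015, 0.00031, 0.00016, 0.00099])  # standard uncertainties

w = 1.0 / u**2
xbar = np.sum(w * x) / np.sum(w)             # weighted mean
u_int = np.sqrt(1.0 / np.sum(w))             # internal uncertainty
chi2 = np.sum(w * (x - xbar) ** 2)
birge = np.sqrt(chi2 / (len(x) - 1))         # Birge ratio
u_birge = u_int * max(birge, 1.0)            # inflate if over-dispersed

# DerSimonian-Laird: moment estimate of between-result variance tau^2.
tau2 = max(0.0, (chi2 - (len(x) - 1))
           / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (u**2 + tau2)
xbar_re = np.sum(w_re * x) / np.sum(w_re)
u_re = np.sqrt(1.0 / np.sum(w_re))

print(f"WLS + Birge:    {xbar:.5f} +/- {u_birge:.5f} (Birge ratio {birge:.2f})")
print(f"Random effects: {xbar_re:.5f} +/- {u_re:.5f} (tau^2 = {tau2:.2e})")
```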

  8. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach while keeping the data selection, processing, and reference models the same.

  9. Lightweight approach to model traceability in a CASE tool

    NASA Astrophysics Data System (ADS)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  10. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  11. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    PubMed

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.
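
    A minimal sketch of the state-occupancy logic in a partitioned survival model (hypothetical exponential PFS/OS curves for illustration only; real analyses would use fitted survival curves):

```python
# Partitioned survival: state membership is read directly off two curves;
# no transition rates are modeled, which is the approach's key shortcut
# and the source of the post-progression pitfalls mentioned above.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)      # years
dt = t[1] - t[0]
S_pfs = np.exp(-0.50 * t)             # progression-free survival curve
S_os = np.exp(-0.25 * t)              # overall survival curve

progression_free = S_pfs              # alive and progression-free
progressed = S_os - S_pfs             # alive after progression
dead = 1.0 - S_os

assert np.all(progressed >= -1e-12), "requires S_os >= S_pfs at all times"
# Simple Riemann approximation of mean time in each state.
print(f"Mean years progression-free: {np.sum(progression_free) * dt:.2f}")
print(f"Mean years post-progression: {np.sum(progressed) * dt:.2f}")
```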

  12. Experimental and modelling studies for the validation of the mechanistic basis of the Local Effect Model

    NASA Astrophysics Data System (ADS)

    Tommasino, F.

    2016-03-01

This review summarizes results obtained in recent years by applying the Local Effect Model (LEM) approach to the study of basic radiobiological questions, such as DNA damage induction and repair and charged-particle track structure. The promising results obtained using different experimental techniques and looking at different biological endpoints support the relevance of the LEM approach for describing radiation effects induced by both low- and high-LET radiation. Furthermore, they suggest that the appropriate combination of experimental and modelling tools can nowadays lead to advances in the understanding of several open issues in the field of radiation biology.

  13. Coupled wake boundary layer model of windfarms

    NASA Astrophysics Data System (ADS)

    Stevens, Richard; Gayme, Dennice; Meneveau, Charles

    2014-11-01

We present a coupled wake boundary layer (CWBL) model that describes the distribution of the power output in a windfarm. The model couples the traditional, industry-standard wake expansion/superposition approach with a top-down model for the overall windfarm boundary layer structure. Wake models capture the effect of turbine positioning, while the top-down approach represents the interaction between the windturbine wakes and the atmospheric boundary layer. Each portion of the CWBL model requires specification of a parameter that is unknown a priori. The wake model requires the wake expansion rate, whereas the top-down model requires the effective spanwise turbine spacing within which the model's momentum balance is relevant. The wake expansion rate is obtained by matching the mean velocity at the turbine from both approaches, while the effective spanwise turbine spacing is determined from the wake model. Coupling of the constitutive components of the CWBL model is achieved by iterating these parameters until convergence is reached. We show that the CWBL model predictions compare more favorably with large eddy simulation results than those made with either the wake or top-down model in isolation, and that the model can be applied successfully to the Horns Rev and Nysted windfarms. The 'Fellowships for Young Energy Scientists' (YES!) of the Foundation for Fundamental Research on Matter supported by NWO, and NSF Grant #1243482.
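
    A minimal sketch of the industry-standard wake building block referenced above, in a Jensen/Park-type form (illustrative parameter values; in the CWBL model the expansion coefficient is precisely the quantity determined by matching against the top-down model):

```python
# Jensen-type top-hat wake deficit with quadratic superposition.
import numpy as np

def jensen_deficit(x, ct=0.8, rotor_radius=40.0, k_wake=0.05):
    """Fractional velocity deficit a distance x downstream of a turbine."""
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k_wake * x / rotor_radius) ** 2

# Quadratic superposition of deficits from several upstream turbines:
distances = np.array([400.0, 800.0, 1200.0])   # m upstream of target turbine
total_deficit = np.sqrt(np.sum(jensen_deficit(distances) ** 2))
print(f"Combined velocity deficit: {total_deficit:.3f}")
```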

  14. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
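
    For intuition (a standard psychometric result, not taken from the article itself), the reliability of an aggregated L2 group mean based on n L1 observations follows the Spearman-Brown form, which shows why few individuals per group or a low intraclass correlation (ICC) yields unreliable group means:

```latex
% lambda: reliability of the observed group mean as a measure of the
% latent group construct; n: number of L1 units sampled per group.
\lambda = \frac{n \cdot \mathrm{ICC}}{1 + (n - 1)\,\mathrm{ICC}}
```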

  15. Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models.

    PubMed

    Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi

    2016-05-01

Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use the lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that the inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models, used as protein Hamiltonians for the design of new sequences, are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
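
    Schematically, the pairwise Potts form referred to above assigns each aligned sequence a = (a_1, ..., a_L) a statistical energy built from single-site fields and pairwise couplings (standard notation; the inference step consists of fitting h and J to the MSA statistics):

```latex
% a_i: amino acid at position i; h_i: site fields; J_ij: pairwise couplings.
E(a_1,\dots,a_L) = -\sum_{i} h_i(a_i) - \sum_{i<j} J_{ij}(a_i, a_j),
\qquad P(a) \propto e^{-E(a)}
```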

  16. Effect of Bayesian Student Modeling on Academic Achievement in Foreign Language Teaching (University Level English Preparatory School Example)

    ERIC Educational Resources Information Center

    Aslan, Burak Galip; Öztürk, Özlem; Inceoglu, Mustafa Murat

    2014-01-01

    Considering the increasing importance of adaptive approaches in CALL systems, this study implemented a machine learning based student modeling middleware with Bayesian networks. The profiling approach of the student modeling system is based on Felder and Silverman's Learning Styles Model and Felder and Soloman's Index of Learning Styles…

  17. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

Efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides ergonomic optimization by analyzing the ergonomic conditions under which manual work is performed. The model includes a schematic and systematic method for analyzing operations, and identifies all ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization essential. This application clearly demonstrates the effectiveness of the new approach.

  18. AN APPROACH TO THE DEVELOPMENT OF MODELS TO QUANTITATIVELY ASSESS THE EFFECTS OF EXPOSURE TO ENVIRONMENTALLY RELEVANT LEVELS OF ENDOCRINE DISRUPTORS

    EPA Science Inventory

    An approach to the development of quantitative models to assess the effects of exposure to environmentally relevant levels of endocrine disruptors on homeostasis in adults.

    Ben-Jonathan N, Cooper RL, Foster P, Hughes CL, Hoyer PB, Klotz D, Kohn M, Lamb DJ, Stancel GM.

  19. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  20. Causal Latent Markov Model for the Comparison of Multiple Treatments in Observational Longitudinal Studies

    ERIC Educational Resources Information Center

    Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio

    2016-01-01

    We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…

  1. Advanced Nonlinear Latent Variable Modeling: Distribution Analytic LMS and QML Estimators of Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Kelava, Augustin; Werner, Christina S.; Schermelleh-Engel, Karin; Moosbrugger, Helfried; Zapf, Dieter; Ma, Yue; Cham, Heining; Aiken, Leona S.; West, Stephen G.

    2011-01-01

Interaction and quadratic effects in latent variable models have to date only rarely been tested in practice. Traditional product indicator approaches need to create product indicators (e.g., x₁², x₁x₄) to serve as indicators of each nonlinear latent construct. These approaches require the use of…

  2. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

  3. A theoretical model for investigating the effect of vacuum fluctuations on the electromechanical stability of nanotweezers

    NASA Astrophysics Data System (ADS)

    Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.

    2015-06-01

In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered, and an asymptotic solution based on a path integral approach is applied to account for the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method and a lumped parameter model. Various aspects of the problem, including the comparison with the van der Waals force regime, the variation of instability parameters and the effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of nanotweezers in the presence of the Casimir force.

  4. An Analytical Approach to Salary Evaluation for Educational Personnel

    ERIC Educational Resources Information Center

    Bruno, James Edward

    1969-01-01

    "In this study a linear programming model for determining an 'optimal' salary schedule was derived then applied to an educational salary structure. The validity of the model and the effectiveness of the approach were established. (Author)

  5. Time series sightability modeling of animal populations

    USGS Publications Warehouse

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.

  6. Include dispersion in quantum chemical modeling of enzymatic reactions: the case of isoaspartyl dipeptidase.

    PubMed

    Zhang, Hai-Mei; Chen, Shi-Lu

    2015-06-09

The lack of dispersion in the B3LYP functional has been proposed to be the main origin of large errors in quantum chemical modeling of a few enzymes and transition metal complexes. In this work, the essential dispersion effects that affect quantum chemical modeling are investigated. With binuclear zinc isoaspartyl dipeptidase (IAD) as an example, dispersion is included in the modeling of enzymatic reactions by two different procedures, i.e., (i) geometry optimizations followed by single-point calculations of dispersion (approach I) and (ii) the inclusion of dispersion throughout geometry optimization and energy evaluation (approach II). Based on a 169-atom chemical model, the calculations show a qualitative consistency between approaches I and II in energetics and most key geometries, demonstrating that both approaches are viable, with the latter preferable since both geometry and energy are dispersion-corrected in approach II. When a smaller model without Arg233 (147 atoms) was used, an inconsistency was observed, indicating that the missing dispersion interactions are essentially responsible for determining equilibrium geometries. Other technical issues and mechanistic characteristics of IAD are also discussed, in particular with respect to the effects of Arg233.

  7. Dynamic ground-effect measurements on the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration

    NASA Technical Reports Server (NTRS)

    Kemmerly, Guy T.

    1990-01-01

    A moving-model ground-effect testing method was used to study the influence of rate-of-descent on the aerodynamic characteristics for the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration for both the approach and roll-out phases of landing. The approach phase was modeled for three rates of descent, and the results were compared to the predictions from the F-15 S/MTD simulation data base (prediction based on data obtained in a wind tunnel with zero rate of descent). This comparison showed significant differences due both to the rate of descent in the moving-model test and to the presence of the ground boundary layer in the wind tunnel test. Relative to the simulation data base predictions, the moving-model test showed substantially less lift increase in ground effect, less nose-down pitching moment, and less increase in drag. These differences became more prominent at the larger thrust vector angles. Over the small range of rates of descent tested using the moving-model technique, the effect of rate of descent on longitudinal aerodynamics was relatively constant. The results of this investigation indicate no safety-of-flight problems with the lower jets vectored up to 80 deg on approach. The results also indicate that this configuration could employ a nozzle concept using lower reverser vector angles up to 110 deg on approach if a no-flare approach procedure were adopted and if inlet reingestion does not pose a problem.

  8. Applying Emax model and bivariate thin plate splines to assess drug interactions

    PubMed Central

    Kong, Maiying; Lee, J. Jack

    2014-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95% point-wise confidence interval as well as its 95% simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies. PMID:20036878
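
    For reference, the two standard ingredients named above are the Emax dose-effect curve and the Loewe additivity condition (textbook forms, not specific to the case-study data):

```latex
% Emax curve: E_0 baseline effect, E_max maximal effect, ED_50 half-effect
% dose, m the Hill slope. Loewe additivity: the combination (d_1, d_2) is
% additive when D_1, D_2 are the single-drug doses giving the same effect.
E(d) = E_0 + \frac{E_{\max}\, d^{\,m}}{\mathrm{ED}_{50}^{\,m} + d^{\,m}},
\qquad \frac{d_1}{D_1} + \frac{d_2}{D_2} = 1
```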

  9. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    PubMed

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.

  10. Impact of correlation of predictors on discrimination of risk models in development and external populations.

    PubMed

    Kundu, Suman; Mazumdar, Madhu; Ferket, Bart

    2017-04-19

The area under the ROC curve (AUC) of risk models is known to be influenced by differences in case-mix and in the effect size of predictors. The impact of heterogeneity in the correlation among predictors has, however, been underinvestigated. We sought to evaluate how correlation among predictors affects the AUC in development and external populations. We simulated hypothetical populations using two different methods based on the means, standard deviations, and correlation of two continuous predictors. In the first approach, the distribution and correlation of predictors were assumed for the total population. In the second approach, these parameters were modeled conditional on disease status. In both approaches, multivariable logistic regression models were fitted to predict disease risk in individuals. Each risk model developed in a population was validated in the remaining populations to investigate external validity. For both approaches, we observed that the magnitude of the AUC in the development and external populations depends on the correlation among predictors. Lower AUCs were estimated in scenarios of both strong positive and negative correlation, depending on the direction of predictor effects and the simulation method. However, when adjusted effect sizes of predictors were specified in opposite directions, increasingly negative correlation consistently improved the AUC. AUCs in external validation populations were higher or lower than in the derivation cohort, even in the presence of similar predictor effects. Discrimination of risk prediction models should therefore be assessed in various external populations with different correlation structures to make better inferences about model generalizability.
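
    A toy simulation in the spirit of the study design (synthetic data, rank-based AUC; with both predictor effects set positive, increasing correlation widens the spread of the linear predictor and raises the AUC):

```python
# How predictor correlation moves the AUC of a two-predictor risk model.
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: P(random case > random control)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
beta = np.array([1.0, 1.0])                      # both effects positive

for rho in (-0.6, 0.0, 0.6):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=4000)
    p = 1.0 / (1.0 + np.exp(-(X @ beta - 1.0)))  # true risk model
    y = rng.binomial(1, p)
    print(f"rho = {rho:+.1f}: AUC = {auc(X @ beta, y):.3f}")
```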

  11. An Alternative Approach to the Extended Drude Model

    NASA Astrophysics Data System (ADS)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
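
    For orientation, the conventional extended Drude analysis extracts both frequency-dependent quantities from the measured complex optical conductivity via the standard relations below (Gaussian units); the alternative discussed in the article instead keeps the mass fixed and lets the plasma frequency become frequency-dependent:

```latex
% omega_p: plasma frequency; sigma(omega): complex optical conductivity.
\frac{1}{\tau(\omega)} = \frac{\omega_p^2}{4\pi}\,
  \mathrm{Re}\!\left[\frac{1}{\sigma(\omega)}\right], \qquad
\frac{m^*(\omega)}{m} = -\frac{\omega_p^2}{4\pi\,\omega}\,
  \mathrm{Im}\!\left[\frac{1}{\sigma(\omega)}\right]
```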

  12. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  13. Seeking for the rational basis of the median model: the optimal combination of multi-model ensemble results

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Giunta, G.; Galmarini, S.

    2007-04-01

    In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (rooted in Bayes' theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results supporting the heuristic approach called the "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
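
    The heuristic itself is simple to state: at each receptor and time step, take the member-wise median of the ensemble predictions. A minimal sketch with synthetic numbers (in the ETEX-1 analysis each row would be one dispersion model's concentration field):

    ```python
    # Minimal sketch of the "median model" ensemble combination.
    import numpy as np

    # shape: (n_models, n_receptors) of predicted concentrations
    predictions = np.array([
        [1.2, 0.4, 3.1, 0.0],
        [0.9, 0.6, 2.4, 0.1],
        [5.0, 0.2, 2.8, 0.0],   # one outlying member barely moves the median
        [1.1, 0.5, 2.9, 0.2],
        [1.0, 0.3, 3.3, 0.1],
    ])
    median_model = np.median(predictions, axis=0)
    spread = np.percentile(predictions, [25, 75], axis=0)  # uncertainty band
    print("median model:", median_model)
    print("interquartile band:", spread)
    ```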

  14. Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Giunta, G.; Galmarini, S.

    2007-12-01

    In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (rooted in Bayes' theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results supporting the heuristic approach called the "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.

  15. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
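
    The multivariate idea can be sketched in a few lines: sample the uncertain MDP parameters jointly, re-solve the MDP for each draw, and report the fraction of draws in which the base-case optimal policy remains optimal. The tiny two-state, two-action MDP and all priors below are invented for illustration and are not the paper's case study.

    ```python
    # Probabilistic multivariate sensitivity analysis for a toy MDP.
    import numpy as np

    rng = np.random.default_rng(2)
    GAMMA = 0.97

    def solve(P, R, tol=1e-6):
        """Value iteration; P has shape (S, A, S), R has shape (S, A)."""
        V = np.zeros(P.shape[0])
        while True:
            Q = R + GAMMA * (P @ V)              # Q[s, a]
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return tuple(Q.argmax(axis=1))   # optimal action per state
            V = V_new

    def make_mdp(p_treat, p_wait, u_treat):
        """States: 0 = healthy, 1 = sick. Actions: 0 = treat, 1 = wait."""
        P = np.array([[[p_treat, 1 - p_treat], [p_wait, 1 - p_wait]],
                      [[0.10, 0.90], [0.10, 0.90]]])     # sick state is sticky
        R = np.array([[u_treat, 1.0],                    # treating costs utility now
                      [0.10, 0.10]])
        return P, R

    base_policy = solve(*make_mdp(0.80, 0.50, 0.80))     # parameters at prior means

    n, agree = 2000, 0
    for _ in range(n):
        # one joint draw of the uncertain parameters (illustrative priors)
        drawn = make_mdp(rng.beta(8, 2), rng.beta(5, 5), rng.normal(0.80, 0.05))
        agree += solve(*drawn) == base_policy
    print(f"confidence in base-case policy: {agree / n:.1%}")
    ```

    Sweeping the acceptance threshold over such confidence estimates is, in spirit, how a policy acceptability curve is traced out.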

  16. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which combines modern mathematical logic principles, neural computation and genetic programming in an effective manner. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities, and in terms of comprehensibility and practical importance for the medical staff involved.

  17. A classification procedure for the effective management of changes during the maintenance process

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.

    1992-01-01

    During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach was validated on a data set provided by NASA/GSFC, which showed it to be effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.

  18. Integrative Approaches for Predicting in vivo Effects of Chemicals from their Structural Descriptors and the Results of Short-term Biological Assays

    PubMed Central

    Low, Yen S.; Sedykh, Alexander; Rusyn, Ivan; Tropsha, Alexander

    2017-01-01

    Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have been used traditionally for predicting chemical toxicity. In recent years, high throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict toxic effects of chemicals in vivo. The data generated in such assays can be considered as biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data for predicting biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best yet offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity. PMID:24805064
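
    The simplest of the integrative strategies such reviews discuss is hybrid descriptor concatenation: append the assay readouts to the molecular descriptor matrix and train a single classifier on both blocks. A minimal sketch on synthetic data follows; real applications would substitute computed molecular descriptors and measured bioassay values.

    ```python
    # Integrated chemical + biological descriptors vs. either block alone.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 300
    X_chem = rng.normal(size=(n, 50))             # molecular descriptors
    X_bio = rng.normal(size=(n, 20))              # short-term assay readouts
    # toxicity depends on both blocks (illustrative generative model)
    y = ((X_chem[:, 0] + X_bio[:, 0] + rng.normal(0, 0.5, n)) > 0).astype(int)

    for name, X in [("chemical only", X_chem),
                    ("biological only", X_bio),
                    ("integrated", np.hstack([X_chem, X_bio]))]:
        auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                              cv=5, scoring="roc_auc").mean()
        print(f"{name:16s} CV AUC = {auc:.3f}")
    ```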

  19. Profile of Students’ Mental Model Change on Law Concepts Archimedes as Impact of Multi-Representation Approach

    NASA Astrophysics Data System (ADS)

    Taher, M.; Hamidah, I.; Suwarma, I. R.

    2017-09-01

    This paper outlines the results of an experimental study on the effects of a multi-representation approach to learning Archimedes' Law on the improvement of students' mental models. The multi-representation techniques implemented in the study were verbal, pictorial, mathematical, and graphical representations. Students' mental models were classified into three levels, i.e. scientific, synthetic, and initial, based on the students' level of understanding. The study employed a pre-experimental methodology using a one-group pretest-posttest design. The subjects were 32 eleventh-grade students at a public senior high school in Riau Province. The research instrument included a mental model test on the hydrostatic pressure concept, in the form of an essay test judged by experts. The findings showed a positive change in students' mental models, indicating that the multi-representation approach was effective in improving them.

  20. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  1. Modelling & Simulation Support to the Effects Based Approach to Operations - Observations from Using GAMMA in MNE 4

    DTIC Science & Technology

    2006-09-01

    The aim of the two parts of the experiment was identical: To explore concepts and supporting tools for Effects Based Approach to Operations (EBAO...feedback on the PMESII factors over time and the degree of achievement of the Operational Endstate. Modelling & Simulation Support to the Effects ...specific situation depends also on his interests. GAMMA provides two different methods: 1. The importance for different PMESII factors (ie potential

  2. Density effects on electronic configurations in dense plasmas

    NASA Astrophysics Data System (ADS)

    Faussurier, Gérald; Blancard, Christophe

    2018-02-01

    We present a quantum mechanical model to describe the density effects on electronic configurations inside a plasma environment. Two different approaches are given by starting from a quantum average-atom model. Illustrations are shown for an aluminum plasma in local thermodynamic equilibrium at solid density and at a temperature of 100 eV and in the thermodynamic conditions of a recent experiment designed to characterize the effects of the ionization potential depression treatment. Our approach compares well with experiment and is consistent in that case with the approach of Stewart and Pyatt to describe the ionization potential depression rather than with the method of Ecker and Kröll.

  3. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, such models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models were highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
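
    The comparison the review describes is easy to set up: fit a statistical model (logistic regression) and an artificial neural network on the same data and compare cross-validated discrimination. The sketch below uses sklearn's bundled breast cancer dataset purely for illustration; it is not one of the nineteen reviewed studies.

    ```python
    # Statistical vs. neural-network risk prediction on the same data.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    models = {
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=2000)),
        "neural network": make_pipeline(StandardScaler(),
                                        MLPClassifier(hidden_layer_sizes=(30,),
                                                      max_iter=2000,
                                                      random_state=0)),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: cross-validated AUC = {auc:.3f}")
    ```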

  4. Configurational coupled cluster approach with applications to magnetic model systems

    NASA Astrophysics Data System (ADS)

    Wu, Siyuan; Nooijen, Marcel

    2018-05-01

    A general exponential, coupled cluster like, approach is discussed to extract an effective Hamiltonian in configurational space, as a sum of 1-body, 2-body up to n-body operators. The simplest two-body approach is illustrated by calculations on simple magnetic model systems. A key feature of the approach is that equations up to a certain rank do not depend on higher body cluster operators.

  5. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency is associated with the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Wall modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.

  6. Probabilistic models for reactive behaviour in heterogeneous condensed phase media

    NASA Astrophysics Data System (ADS)

    Baer, M. R.; Gartling, D. K.; DesJardin, P. E.

    2012-02-01

    This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.

  7. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    NASA Astrophysics Data System (ADS)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. F.

    2013-01-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here, which uses experimental surface tension data obtained for each organic species in the presence of salt within the Henning model. We recommend method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
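
    For a single surfactant, a Szyszkowski-Langmuir-type isotherm can be fitted to surface tension versus concentration data in a few lines; the weighted multi-component Henning form extends this with per-species terms. The parameterization sigma = sigma_w - a*ln(1 + b*C) and the data points below are illustrative assumptions, not values from the study.

    ```python
    # Fitting a Szyszkowski-Langmuir-type isotherm (one common form).
    import numpy as np
    from scipy.optimize import curve_fit

    SIGMA_W = 72.0  # pure-water surface tension, mN/m (approx., 25 C)

    def szyszkowski(C, a, b):
        return SIGMA_W - a * np.log(1.0 + b * C)

    # synthetic measurements: concentration (M) and surface tension (mN/m)
    C = np.array([0.0, 0.01, 0.05, 0.1, 0.5, 1.0, 2.0])
    sigma = np.array([72.0, 70.8, 68.1, 66.3, 60.9, 58.2, 55.4])

    (a, b), cov = curve_fit(szyszkowski, C, sigma, p0=(5.0, 10.0))
    residuals = sigma - szyszkowski(C, a, b)
    chi2 = np.sum(residuals**2)
    print(f"a = {a:.2f} mN/m, b = {b:.2f} 1/M, chi^2 = {chi2:.3f}")
    ```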

  8. Instantiating the art of war for effects-based operations

    NASA Astrophysics Data System (ADS)

    Burns, Carla L.

    2002-07-01

    Effects-Based Operations (EBO) is a mindset, a philosophy and an approach for planning, executing and assessing military operations for the effects they produce rather than the targets or even the objectives they deal with. An EBO approach strives to provide economy of force, dynamic tasking, and reduced collateral damage. The notion of EBO is not new: military commanders have certainly had desired effects in mind when conducting military operations. However, to date EBO has been an art of war that lacks automated techniques and tools enabling effects-based analysis and assessment, and modeling and simulation is at the heart of this challenge. The Air Force Research Laboratory (AFRL) EBO Program is developing modeling techniques and corresponding tool capabilities that can be brought to bear against the challenges presented by effects-based analysis and assessment. Effects-based course-of-action development, center of gravity/target system analysis, and wargaming capabilities are being developed and integrated to help give commanders the decision-support information required to achieve desired national security objectives. This paper presents an introduction to effects-based operations, discusses the benefits of an EBO approach, and focuses on modeling and analysis for effects-based strategy development. An overview of modeling and simulation challenges for EBO is presented, setting the stage for the detailed technical papers in the subject session.

  9. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
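
    The computational pattern described here, an expensive forward model replaced by a cheap surrogate inside an MCMC loop, can be illustrated compactly. The one-parameter "damage location" surrogate, the sensor layout, and the random-walk Metropolis sampler below are all illustrative stand-ins for the paper's FE surrogate and sampler.

    ```python
    # Metropolis sampling of a damage parameter through a cheap surrogate.
    import numpy as np

    rng = np.random.default_rng(4)
    sensors = np.linspace(0.0, 1.0, 8)            # sensor x-positions

    def surrogate(loc):
        """Cheap stand-in for the FE model: strain peaks near the damage."""
        return np.exp(-((sensors - loc) / 0.15) ** 2)

    true_loc, noise_sd = 0.62, 0.05
    data = surrogate(true_loc) + rng.normal(0, noise_sd, sensors.size)

    def log_post(loc):
        if not 0.0 <= loc <= 1.0:                 # uniform prior on [0, 1]
            return -np.inf
        return -0.5 * np.sum((data - surrogate(loc)) ** 2) / noise_sd**2

    samples, loc, lp = [], 0.5, log_post(0.5)
    for _ in range(20000):
        prop = loc + rng.normal(0, 0.05)          # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            loc, lp = prop, lp_prop
        samples.append(loc)

    post = np.array(samples[5000:])               # discard burn-in
    print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (true {true_loc})")
    ```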

  10. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    PubMed

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Differentiating the Differentiation Models: A Comparison of the Retrieving Effectively from Memory Model (REM) and the Subjective Likelihood Model (SLiM)

    ERIC Educational Resources Information Center

    Criss, Amy H.; McClelland, James L.

    2006-01-01

    The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…

  12. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    PubMed Central

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
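
    The IPTW ingredient of the estimator is straightforward to sketch for a single time point: model the probability of treatment given confounders and form stabilized inverse-probability weights. The synthetic data and variable names below are illustrative; the paper's estimator applies this per occasion and combines the weights with the regression-with-residuals step.

    ```python
    # Stabilized inverse-probability-of-treatment weights, one time point.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 5000
    confounder = rng.normal(size=(n, 1))
    p_treat = 1 / (1 + np.exp(-confounder[:, 0]))          # confounded treatment
    a = (rng.random(n) < p_treat).astype(int)

    ps = LogisticRegression().fit(confounder, a).predict_proba(confounder)[:, 1]
    p_marginal = a.mean()                                   # stabilization factor
    weights = np.where(a == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))
    print(f"mean weight {weights.mean():.3f} (should be near 1), "
          f"max weight {weights.max():.2f}")
    ```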

  13. Effects of wildfire on catchment runoff response: a modeling approach to detect changes in snow-dominated forested catchments

    Treesearch

    Jan Seibert; Jeffrey J. McDonnell; Richard D. Woodsmith

    2010-01-01

    Wildfire is an important disturbance affecting hydrological processes through alteration of vegetation cover and soil characteristics. The effects of fire on hydrological systems at the catchment scale are not well known, largely because site specific data from both before and after wildfire are rare. In this study a modelling approach was employed for change detection...

  14. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows for substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  15. A multilevel modelling approach to analysis of patient costs under managed care.

    PubMed

    Carey, K

    2000-07-01

    The growth of the managed care model of health care delivery in the USA has led to broadened interest in the performance of health care providers. This paper uses multilevel modelling to analyse the effects of managed care penetration on patient-level costs for a sample of 24 medical centres operated by the Veterans Health Administration (VHA). The appropriateness of a two-level approach to this problem over ordinary least squares (OLS) is demonstrated. Results indicate a modicum of difference in institutions' performance after controlling for patient effects. Facilities more heavily penetrated by the managed care model may be more effective at controlling costs of their sicker patients. Copyright 2000 John Wiley & Sons, Ltd.
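
    A two-level model of this kind, patients nested in medical centres with a random intercept per centre, is a one-liner in statsmodels. The synthetic data and variable names below are illustrative, not the VHA data.

    ```python
    # Random-intercept multilevel model: patients within centres.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    centres = np.repeat(np.arange(24), 50)                 # 24 centres, 50 patients each
    centre_effect = rng.normal(0, 2, 24)[centres]          # level-2 variation
    severity = rng.normal(size=centres.size)               # patient-level covariate
    cost = 10 + 3 * severity + centre_effect + rng.normal(0, 1, centres.size)

    df = pd.DataFrame({"cost": cost, "severity": severity, "centre": centres})
    model = smf.mixedlm("cost ~ severity", df, groups=df["centre"]).fit()
    print(model.summary())
    ```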

  16. Performance of stochastic approaches for forecasting river water quality.

    PubMed

    Ahmad, S; Khan, I H; Parida, B P

    2001-12-01

    This study analysed water quality data collected from the river Ganges in India from 1981 to 1990 for forecasting using stochastic models. Initially, box and whisker plots and Kendall's tau test were used to identify trends during the study period. For detecting possible interventions in the data, time series plots and cusum charts were used. Three approaches to stochastic modelling, which account for the effect of seasonality in different ways (the multiplicative autoregressive integrated moving average (ARIMA) model, the deseasonalised model, and the Thomas-Fiering model), were used to model the observed pattern in water quality. Multiplicative ARIMA models having both nonseasonal and seasonal components were, in general, identified as appropriate. In the deseasonalised modelling approach, lower-order ARIMA models were found appropriate for the stochastic component. A set of Thomas-Fiering models was formed for each month for all water quality parameters. These models were then used to forecast future values. The error estimates of forecasts from the three approaches were compared to identify the most suitable approach for reliable forecasting. The deseasonalised modelling approach was recommended for forecasting water quality parameters of a river.
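
    The multiplicative seasonal ARIMA approach is directly available in statsmodels. The sketch below fits an illustrative (1,0,1)x(1,0,1,12) model to a synthetic monthly water-quality series and forecasts a year ahead; the orders and data are assumptions, not those identified in the study.

    ```python
    # Seasonal ARIMA forecast of a synthetic monthly water-quality series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(7)
    months = pd.date_range("1981-01", periods=120, freq="MS")
    seasonal = 2.0 * np.sin(2 * np.pi * months.month / 12)   # annual cycle
    series = pd.Series(8.0 + seasonal + rng.normal(0, 0.5, 120), index=months)

    model = SARIMAX(series, order=(1, 0, 1),
                    seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    forecast = model.forecast(steps=12)                      # next 12 months
    print(forecast.round(2))
    ```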

  17. A Physically Based Analytical Model to Describe Effective Excess Charge for Streaming Potential Generation in Water Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Guarracino, L.; Jougnot, D.

    2018-01-01

    Among the different contributions generating self-potential, the streaming potential is of particular interest in hydrogeology for its sensitivity to water flow. Estimating water flux in porous media using streaming potential data relies on our capacity to understand, model, and upscale the electrokinetic coupling at the mineral-solution interface. Different approaches have been proposed to predict streaming potential generation in porous media. One of these approaches is the flux averaging which is based on determining the excess charge which is effectively dragged in the medium by water flow. In this study, we develop a physically based analytical model to predict the effective excess charge in saturated porous media using a flux-averaging approach in a bundle of capillary tubes with a fractal pore size distribution. The proposed model allows the determination of the effective excess charge as a function of pore water ionic concentration and hydrogeological parameters like porosity, permeability, and tortuosity. The new model has been successfully tested against different set of experimental data from the literature. One of the main findings of this study is the mechanistic explanation to the empirical dependence between the effective excess charge and the permeability that has been found by several researchers. The proposed model also highlights the link to other lithological properties, and it is able to reproduce the evolution of effective excess charge with electrolyte concentrations.

  18. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    PubMed Central

    2009-01-01

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. As described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort show that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754

  19. Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods.

    PubMed

    Gilmartin, Heather M; Sousa, Karen H; Battaglia, Catherine

    2016-01-01

    The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness Refined study. The sample was randomly split into exploration and validation subsets. The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. There were small statistical, but large conceptual, differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models resulted in a good fit to the data. Lessons learned are presented. Comparison of modeling approaches is recommended when initially modeling variables that have never been modeled or that have directional ambiguity, to increase transparency and bring confidence to study findings.

  20. Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods

    PubMed Central

    Gilmartin, Heather M.; Sousa, Karen H.; Battaglia, Catherine

    2016-01-01

    Background: The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Objectives: Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. Methods: A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined study. The sample was randomly split into exploration and validation subsets. Results: The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = −.28; composite = −.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. Discussion: There were small statistical, but large conceptual, differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models resulted in a good fit to the data. Lessons learned are presented. Comparison of modeling approaches is recommended when initially modeling variables that have never been modeled or that have directional ambiguity, to increase transparency and bring confidence to study findings. PMID:27579507

  1. A short review of perfectionism in sport, dance and exercise: out with the old, in with the 2×2.

    PubMed

    Hill, Andrew P; Madigan, Daniel J

    2017-08-01

    The purpose of the current paper is to review research examining multidimensional perfectionism in sport, dance, and exercise. We start by providing a conceptual overview of perfectionism. We then describe three main approaches to examining perfectionism. These approaches are an independent effects approach, the tripartite model, and the 2×2 model of perfectionism. Alongside the description of each approach, research findings are summarized. We close the paper by explaining how the development of the 2×2 model has likely rendered the tripartite model obsolete. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. PROCRU: A model for analyzing flight crew procedures in approach to landing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Zacharias, G.; Muraidharan, R.; Lancraft, R.

    1982-01-01

    A model for the human performance of approach and landing tasks that would provide a means for systematic exploration of questions concerning the impact of procedural and equipment design and the allocation of resources in the cockpit on performance and safety in approach-to-landing is discussed. A system model is needed that accounts for the interactions of crew, procedures, vehicle, approach geometry, and environment. The issues of interest revolve principally around allocation of tasks in the cockpit and crew performance with respect to the cognitive aspects of the tasks. The model must, therefore, deal effectively with information processing and decision-making aspects of human performance.

  3. Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.

    PubMed

    Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc

    2018-08-01

    Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violates algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue is captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.
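
    A heavily simplified sketch of the decomposition idea: learn normal appearance with PCA, reconstruct a test image from that subspace, and treat large residuals as candidate pathology. The full model instead solves a convex problem with total-variation and sparsity terms and alternates with registration; the tiny synthetic 1-D "scans" below are purely illustrative.

    ```python
    # PCA normal-appearance model + residual-based pathology flagging.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)
    x = np.linspace(0, 1, 200)
    normals = np.array([np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
                        for _ in range(40)])                 # normal appearance

    pca = PCA(n_components=5).fit(normals)

    test = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
    test[90:110] += 1.5                                      # injected "lesion"

    quasi_normal = pca.inverse_transform(pca.transform(test[None, :]))[0]
    residual = np.abs(test - quasi_normal)
    flagged = np.where(residual > 5 * np.median(residual))[0]  # robust threshold
    print("flagged indices (true lesion at 90-109):", flagged)
    ```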

  4. Modeling and predicting historical volatility in exchange rate markets

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2017-04-01

    Volatility modeling and forecasting of currency exchange rates are important in several business risk management tasks, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
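
    The pipeline is easy to reproduce in outline: build a few technical indicators from an exchange-rate series and train a small network to predict next-period historical volatility. The indicators, network size, and synthetic series below are illustrative assumptions, not the study's exact inputs.

    ```python
    # Technical indicators -> MLP forecast of rolling historical volatility.
    import numpy as np
    import pandas as pd
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    ret = pd.Series(rng.standard_t(df=5, size=3000) * 0.005)  # fat-tailed returns
    price = 1.3 * np.exp(ret.cumsum())                        # synthetic exchange rate

    vol = ret.rolling(21).std()                               # historical volatility
    features = pd.DataFrame({
        "mom_5": price.pct_change(5),                         # momentum indicators
        "mom_20": price.pct_change(20),
        "vol_lag": vol.shift(1),                              # lagged volatility
        "range_10": price.rolling(10).max() / price.rolling(10).min() - 1,
    })
    data = features.assign(target=vol).dropna()
    split = int(0.8 * len(data))
    train, test = data.iloc[:split], data.iloc[split:]

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                                       random_state=0))
    model.fit(train.drop(columns="target"), train["target"])
    pred = model.predict(test.drop(columns="target"))
    print(f"out-of-sample MAE: {np.mean(np.abs(pred - test['target'])):.5f}")
    ```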

  5. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute of Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic aspects of manual work performed under correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  6. A Modified Dynamic Evolving Neural-Fuzzy Approach to Modeling Customer Satisfaction for Affective Design

    PubMed Central

    Kwong, C. K.; Fung, K. Y.; Jiang, Huimin; Chan, K. Y.

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and it has been proved to be an effective one to deal with the fuzziness and non-linearity of the modeling as well as generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for the modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort. PMID:24385884

  7. A modified dynamic evolving neural-fuzzy approach to modeling customer satisfaction for affective design.

    PubMed

    Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and it has been proved to be an effective one to deal with the fuzziness and non-linearity of the modeling as well as generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for the modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.

  8. Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map

    NASA Astrophysics Data System (ADS)

    Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.

    2016-03-01

    Multipath is a major error source in high-accuracy GNSS positioning, and various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches exploit the spatiotemporal repeatability of the multipath effect in a static environment, hence they can be used to generate multipath correction models for real-time GNSS data processing. We focus on the repeatability-based MHM and SF/ASF approaches and compare their performance for multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, with about 50 % residual variance reduction over a short span (the next 5 days), and maintain a roughly 45 % reduction level over a longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications, while the MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is of medium to low frequency.
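
    Building an MHM reduces to gridding the sky and averaging residuals: bin carrier-phase residuals from previous static days by satellite azimuth and elevation, then look the correction up by direction on later days. The bin sizes, the synthetic residual pattern, and the lookup helper below are illustrative.

    ```python
    # Minimal multipath hemispherical map: mean residual per sky-grid cell.
    import numpy as np

    rng = np.random.default_rng(10)
    n = 100000
    az = rng.uniform(0, 360, n)                     # satellite azimuth, deg
    el = rng.uniform(5, 89, n)                      # satellite elevation, deg
    # synthetic residuals: a smooth multipath pattern plus noise (mm)
    resid = 3.0 * np.sin(np.radians(az)) * np.exp(-el / 30) + rng.normal(0, 1, n)

    az_bins, el_bins = np.arange(0, 361, 2), np.arange(5, 91, 2)
    sums, _, _ = np.histogram2d(az, el, bins=[az_bins, el_bins], weights=resid)
    counts, _, _ = np.histogram2d(az, el, bins=[az_bins, el_bins])
    mhm = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)  # cell means

    def correction(azimuth, elevation):
        """Look up the multipath correction for a satellite direction."""
        i = np.searchsorted(az_bins, azimuth) - 1
        j = np.searchsorted(el_bins, elevation) - 1
        return mhm[i, j]

    print(f"correction at az=90, el=15: {correction(90.0, 15.0):+.2f} mm")
    ```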

  9. Estimating Latent Variable Interactions With Non-Normal Observed Data: A Comparison of Four Approaches

    PubMed Central

    Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.

    2012-01-01

    A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of non-normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417

  10. Use of a business excellence model to improve conservation programs.

    PubMed

    Black, Simon; Groombridge, Jim

    2010-12-01

    The current shortfall in effectiveness within conservation biology is illustrated by increasing interest in "evidence-based conservation," whose proponents have identified the need to benchmark conservation initiatives against actions that lead to proven positive effects. The effectiveness of conservation policies, approaches, and evaluation is under increasing scrutiny, and in these areas models of excellence used in business could prove valuable. Typically, conservation programs require years of effort and involve rigorous long-term implementation processes. Successful balance of long-term efforts alongside the achievement of short-term goals is often compromised by management or budgetary constraints, a situation also common in commercial businesses. "Business excellence" is an approach many companies have used over the past 20 years to ensure continued success. Various business excellence evaluations have been promoted that include concepts that could be adapted and applied in conservation programs. We describe a conservation excellence model that shows how scientific processes and results can be aligned with financial and organizational measures of success. We applied the model to two well-documented species conservation programs. In the first, the Po'ouli program, several aspects of improvement were identified, such as more authority for decision making in the field and better integration of habitat management and population recovery processes. The second example, the black-footed ferret program, could have benefited from leadership effort to reduce bureaucracy and to encourage use of best-practice species recovery approaches. The conservation excellence model enables greater clarity in goal setting, more-effective identification of job roles within programs, better links between technical approaches and measures of biological success, and more-effective use of resources. The model could improve evaluation of a conservation program's effectiveness and may be used to compare different programs, for example during reviews of project performance by sponsoring organizations. © 2010 Society for Conservation Biology.

  11. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    PubMed Central

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the "large d, small n" characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most existing integrative analyses, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers, and several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival; this model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. A simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of the heterogeneity model and the proposed approach. PMID:23938111

  12. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system; nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
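
    The numerical core of such tools is the transient solution of a (typically sparse) Markov chain generated from the fault tree. Below is a minimal dense sketch for a duplex system with repair (two units up, one up, system failed); the rates are illustrative, and a production solver would use sparse methods rather than a dense matrix exponential.

    ```python
    # Transient reliability of a tiny continuous-time Markov chain.
    import numpy as np
    from scipy.linalg import expm

    lam, mu = 1e-3, 1e-1          # failure and repair rates (per hour)
    # states: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing)
    Q = np.array([
        [-2 * lam, 2 * lam, 0.0],
        [mu, -(mu + lam), lam],
        [0.0, 0.0, 0.0],
    ])
    p0 = np.array([1.0, 0.0, 0.0])

    for t in (10.0, 100.0, 1000.0):
        p_t = p0 @ expm(Q * t)     # transient state probabilities
        print(f"t={t:6.0f} h  reliability={1 - p_t[2]:.6f}")
    ```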

  13. Dynamic modeling and parameter estimation of a radial and loop type distribution system network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Qui; Heng Chen; Girgis, A.A.

    1993-05-01

    This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are presented in this paper.

  14. Does integration matter? A holistic model for building community resilience in Pakistan.

    PubMed

    Kanta Kafle, Shesh

    2017-01-01

    This paper analyses an integrated community-based risk reduction model adopted by the Pakistan Red Crescent. The paper analyses the model's constructs and definitions, and provides a conceptual framework and a set of practical recommendations for building community resilience. The study uses an outcome-based resilience index to assess the effectiveness of the approach. The results indicate that the integrated programming approach is an effective way to build community resilience, as it offers a number of tangible and long-lasting benefits, including effective and efficient service delivery, local ownership, sustainability of results, and improved local resilience with respect to the shocks and stresses associated with disasters. The paper also outlines a set of recommendations for the effective and efficient use of the model for building community resilience in Pakistan.

  15. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach.

    PubMed

    Cobden, David S; Niessen, Louis W; Rutten, Frans Fh; Redekop, W Ken

    2010-09-07

    While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness ratio of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health economic outcomes, and validation of different approaches to modeling adherence, is warranted.
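
    A minimal sketch of the kind of Markov cohort calculation described here, with entirely hypothetical states, transition probabilities, costs, and utilities; adherence could enter such a model by modifying each strategy's transition probabilities, which is one plausible reading of the authors' approach:

    ```python
    import numpy as np

    # Hypothetical 3-state cohort model: well, complication, dead.
    P_oad = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.93, 0.07],
                      [0.00, 0.00, 1.00]])   # rows: from-state, cols: to-state
    P_idm = np.array([[0.93, 0.05, 0.02],
                      [0.00, 0.94, 0.06],
                      [0.00, 0.00, 1.00]])
    cost = {"oad": np.array([800.0, 4000.0, 0.0]),    # per-cycle costs ($)
            "idm": np.array([2200.0, 4000.0, 0.0])}
    qaly = np.array([0.85, 0.60, 0.0])                # per-cycle utilities

    def run(P, c, cycles=40, disc=0.03):
        state = np.array([1.0, 0.0, 0.0])             # cohort starts well
        tot_c = tot_q = 0.0
        for t in range(cycles):
            d = 1.0 / (1.0 + disc) ** t               # discounting
            tot_c += d * state @ c
            tot_q += d * state @ qaly
            state = state @ P                         # advance one cycle
        return tot_c, tot_q

    c0, q0 = run(P_oad, cost["oad"])
    c1, q1 = run(P_idm, cost["idm"])
    print("ICER ($/QALY):", (c1 - c0) / (q1 - q0))
    ```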

  16. Economic modeling of HIV treatments.

    PubMed

    Simpson, Kit N

    2010-05-01

    To review the general literature on microeconomic modeling and key points that must be considered in the general assessment of economic modeling reports; to discuss the evolution of HIV economic models and identify models that illustrate this development over time, as well as examples of current studies; and to recommend improvements in HIV economic modeling. Recent economic modeling studies of HIV include examinations of scaling up antiretroviral (ARV) therapy in South Africa, screening prior to use of abacavir, preexposure prophylaxis, early start of ARV therapy in developing countries, and cost-effectiveness comparisons of specific ARV drugs using data from clinical trials. These studies all used extensively published second-generation Markov models in their analyses. There have been attempts to simplify approaches to cost-effectiveness estimation by using simple decision trees or cost-effectiveness calculations with short time horizons. However, these approaches leave out important cumulative economic effects that will not appear early in a treatment. Many economic modeling studies were identified in the 'gray' literature, but limited descriptions precluded an assessment of their adherence to modeling guidelines, and thus of the validity of their findings. There is a need to develop third-generation models that accommodate new knowledge about adherence, adverse effects, and viral resistance.

  17. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  18. Discovering human germ cell mutagens with whole genome sequencing: Insights from power calculations reveal the importance of controlling for between-family variability.

    PubMed

    Webster, R J; Williams, A; Marchetti, F; Yauk, C L

    2018-07-01

    Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and a mixed-effects model sampling two to four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group, when the increase in mutations ranges from 40% to 10%, respectively. Modeling family variability using mixed-effects models provided a reduction in sample size compared with a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
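
    The study's simulation-based power logic can be reproduced in outline: generate pedigrees in which the paternal-age slope varies between families, fit a mixed-effects model, and count significant exposure effects. A rough sketch with invented effect sizes and a Gaussian approximation to mutation counts, not the authors' code:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    def simulate(n_fam=20, sibs=4, effect=5.0):
        """Hypothetical de novo mutation counts with family-varying slopes."""
        rows = []
        for f in range(2 * n_fam):                 # half exposed, half control
            exposed = f < n_fam
            slope = rng.normal(1.5, 0.4)           # between-family variability
            for _ in range(sibs):
                age = rng.uniform(25, 45)
                mu = 20 + slope * age + (effect if exposed else 0.0)
                rows.append(dict(family=f, age=age, exposed=int(exposed),
                                 muts=rng.normal(mu, 4.0)))
        return pd.DataFrame(rows)

    def power(n_sim=100, alpha=0.05, **kw):
        hits = 0
        for _ in range(n_sim):
            fit = smf.mixedlm("muts ~ age + exposed", simulate(**kw),
                              groups="family", re_formula="~age").fit()
            hits += fit.pvalues["exposed"] < alpha
        return hits / n_sim

    print(power(n_sim=50))    # small n_sim keeps the check quick
    ```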

  19. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling of continuous explanatory variables (covariates) in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling strikes a balance between the control of false positives and the sensitivity of activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789

  20. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling of continuous explanatory variables (covariates) in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling strikes a balance between the control of false positives and the sensitivity of activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.
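
    One by-product mentioned in these records, the intraclass correlation from an LME fit, can be illustrated compactly. The sketch below uses a simple random-intercept model on synthetic per-session effect estimates; the crossed-random-effects formulation in the paper is more general:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    subj = np.repeat(np.arange(30), 3)              # 30 subjects, 3 sessions
    truth = rng.normal(0.5, 0.3, 30)                # subject-level effects
    beta = truth[subj] + rng.normal(0.0, 0.2, subj.size)
    df = pd.DataFrame(dict(subject=subj, beta=beta))

    fit = smf.mixedlm("beta ~ 1", df, groups="subject").fit()
    var_subj = float(fit.cov_re.iloc[0, 0])         # between-subject variance
    var_res = fit.scale                             # residual variance
    print("ICC:", var_subj / (var_subj + var_res))
    ```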

  1. NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH

    EPA Science Inventory

    Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...

  2. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    NASA Astrophysics Data System (ADS)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging; as a consequence, these processes remain poorly understood and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches are threshold-based, triggering bubble release when a threshold on CH4 pore-water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) is exceeded. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.

    Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, the temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (a single horizontally homogeneous peat column). This approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
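
    A caricature of the volume-threshold (EBG-type) scheme, with invented parameter names and values; real implementations track the free gas volume through the ideal gas law and the peat temperature and pressure profiles:

    ```python
    def ebg_step(gas_volume, production, v_thresh=0.05, v_residual=0.01):
        """One time step of a volume-threshold ebullition scheme: free-phase
        gas accumulates until its volume fraction exceeds v_thresh, then the
        excess above a residual volume is released as a bubble flux."""
        gas_volume += production          # CH4 entering the free gas phase
        flux = 0.0
        if gas_volume > v_thresh:
            flux = gas_volume - v_residual
            gas_volume = v_residual
        return gas_volume, flux
    ```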

  3. Investigation of tDCS volume conduction effects in a highly realistic head model

    NASA Astrophysics Data System (ADS)

    Wagner, S.; Rampersad, S. M.; Aydin, Ü.; Vorwerk, J.; Oostendorp, T. F.; Neuling, T.; Herrmann, C. S.; Stegeman, D. F.; Wolters, C. H.

    2014-02-01

    Objective. We investigate volume conduction effects in transcranial direct current stimulation (tDCS) and present a guideline for efficient and yet accurate volume conductor modeling in tDCS using our newly developed finite element (FE) approach. Approach. We developed a new, accurate and fast isoparametric FE approach for high-resolution geometry-adapted hexahedral meshes and tissue anisotropy. To attain a deeper insight into tDCS, we performed computer simulations, starting with a homogenized three-compartment head model and extending this step by step to a six-compartment anisotropic model. Main results. We are able to demonstrate important tDCS effects. First, we find channeling effects of the skin, the skull spongiosa and the cerebrospinal fluid compartments. Second, current vectors tend to be oriented towards the closest higher conducting region. Third, anisotropic WM conductivity causes current flow in directions more parallel to the WM fiber tracts. Fourth, the highest cortical current magnitudes are not only found close to the stimulation sites. Fifth, the median brain current density decreases with increasing distance from the electrodes. Significance. Our results allow us to formulate a guideline for volume conductor modeling in tDCS. We recommend accurately modeling the major tissues between the stimulating electrodes and the target areas; for efficient yet accurate modeling, an exact representation of the other tissues is less important. Because the quasi-static approximation is justified in the low-frequency regime of electrophysiology, our results should also be valid at least for low-frequency (e.g., below 100 Hz) transcranial alternating current stimulation.

  4. Representation of Vegetation and Other Nonerodible Elements in Aeolian Shear Stress Partitioning Models for Predicting Transport Threshold

    NASA Technical Reports Server (NTRS)

    King, James; Nickling, William G.; Gillies, John A.

    2005-01-01

    The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind-driven particle fluxes are reduced by the poor representation of the effectiveness of vegetation in reducing wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995), in their dust emission model, parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to their incorporation of the roughness geometry and the flow alterations the elements cause.
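
    The Raupach et al. (1993) partitioning result for the ratio of smooth-surface to rough-surface threshold friction velocity has a compact closed form. The sketch below implements the commonly cited expression; the default parameter values are illustrative, not fitted to the data discussed here:

    ```python
    def raupach_threshold_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
        """R_t = [(1 - m*sigma*lam) * (1 + m*beta*lam)]**-0.5, where lam is
        the roughness density (frontal area index), beta the element-to-
        surface drag ratio, sigma the basal-to-frontal area ratio, and m a
        factor for spatial heterogeneity of the surface stress."""
        return ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam)) ** -0.5

    for lam in (0.01, 0.05, 0.1):
        print(lam, raupach_threshold_ratio(lam))  # ratio falls as cover grows
    ```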

  5. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
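
    The model-comparison step described above — weighting candidate models that differ in number of sources and identifiability conditions — reduces, once (log) marginal likelihoods are in hand, to a normalization. A minimal sketch with invented numbers; estimating the marginal likelihoods themselves requires the MCMC machinery the report describes:

    ```python
    import numpy as np

    def posterior_model_probs(log_marglik, log_prior=None):
        """Posterior model probabilities from log marginal likelihoods,
        computed stably with the log-sum-exp trick."""
        lp = np.asarray(log_marglik, dtype=float)
        if log_prior is not None:
            lp = lp + np.asarray(log_prior, dtype=float)
        lp = lp - lp.max()                  # stabilize the exponentials
        w = np.exp(lp)
        return w / w.sum()

    # Hypothetical log marginal likelihoods for models with 3..7 sources:
    print(posterior_model_probs([-2410.2, -2395.8, -2391.1, -2393.0, -2398.4]))
    ```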

  6. Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement

    PubMed Central

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach goals predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836

  7. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  8. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing similarly accurate or more accurate analyses. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to a factor of ∼10. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces smaller mean absolute differences as well as higher correlations between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square errors of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
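
    Schematically, the WC objective adds a penalized structural-error term to the usual strong-constraint variational cost. A sketch under standard notational assumptions (B, R, Q as background, observation, and structural-error covariances); the SC cost is recovered by fixing eps at zero, and the authors' SAC-SMA implementation is more involved:

    ```python
    import numpy as np

    def wc_cost(x, eps, y, h, x_b, B_inv, R_inv, Q_inv):
        """Weakly-constrained variational cost: background misfit, observation
        misfit, and a penalty on the additive structural error eps applied to
        simulated runoff before comparison with the observed discharge y."""
        dx = x - x_b
        innov = y - (h(x) + eps)          # h maps model states to runoff
        return (dx @ B_inv @ dx
                + innov @ R_inv @ innov
                + eps @ Q_inv @ eps)      # this last term is absent in SC DA
    ```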

  9. Effects of Visual Cues of a Moving Model Predator on Body Patterns in Cuttlefish Sepia pharaonis.

    PubMed

    Okamoto, Kohei; Mori, Akira; Ikeda, Yuzuru

    2015-08-01

    We examined the effects of predator-prey distance (PPD) and trajectory of the predator on the body patterns that the pharaoh cuttlefish, Sepia pharaonis, shows in response to a predator. A model predator moving in three different trajectories was presented to the cuttlefish: T1, approached the cuttlefish but bypassed above; T2, approached directly toward the cuttlefish; T3, bypassed the cuttlefish both vertically and horizontally. We divided the body patterns that the cuttlefish expressed into seven categories, i.e., "uniform light", "disruptive", "center circle", "dark square", "vertical stripe", "all dark" and "eyespots". In T1, the number of individuals that showed "dark square" increased as the model approached the cuttlefish, whereas the number of individuals that showed "disruptive" decreased. In T2, the number of individuals that showed "all dark" and "eyespots" increased as the model approached the cuttlefish. In T3, the number of individuals that showed "dark square" and "vertical stripe" increased as the model approached the cuttlefish, and it tended to decrease as the model receded from the cuttlefish. These results demonstrate that S. pharaonis changes its body patterns according to PPD and the trajectory of the predator, which would affect predation risk and/or predator perception.

  10. Modeling Alaska boreal forests with a controlled trend surface approach

    Treesearch

    Mo Zhou; Jingjing Liang

    2012-01-01

    A Controlled Trend Surface approach was proposed to account simultaneously for large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, addressing large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  11. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  12. The Effects of Cognitive Style on Edmodo Users' Behaviour: A Structural Equation Modeling-Based Multi-Group Analysis

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Reisoglu, Ilknur

    2017-01-01

    Purpose: The purpose of this paper is to explore the validity of extended technology acceptance model (TAM) in explaining pre-service teachers' Edmodo acceptance and the variation of variables related to TAM among pre-service teachers having different cognitive styles. Design/methodology/approach: Structural equation modeling approach was used to…

  13. Detecting Responses of Loblolly Pine Stand Development to Site-Preparation Intensity: A Modeling Approach

    Treesearch

    Mingguang Xu; Timothy B. Harrington; M. Boyd Edwards

    1997-01-01

    Data from an existing site preparation experiment in the Georgia Piedmont were subjected to a modeling approach to analyze effects of site preparation intensity on stand development of loblolly pine (Pinus taeda L.) 5 to 12 years since treatment. An average stand height model that incorporated indicator variables for treatment provided an accurate...

  14. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Treesearch

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  15. Designing water demand management schemes using a socio-technical modelling approach.

    PubMed

    Baki, Sotiria; Rozos, Evangelos; Makropoulos, Christos

    2018-05-01

    Although it is now widely acknowledged that urban water systems (UWSs) are complex socio-technical systems and that a shift towards a socio-technical approach is critical in achieving sustainable urban water management, still, more often than not, UWSs are designed using a segmented modelling approach. As such, either the analysis focuses on the description of the purely technical sub-system, without explicitly taking into account the system's dynamic socio-economic processes, or a more interdisciplinary approach is followed, but delivered through relatively coarse models, which often fail to provide a thorough representation of the urban water cycle and hence cannot deliver accurate estimations of the hydrosystem's responses. In this work we propose an integrated modelling approach for the study of the complete socio-technical UWS that also takes into account socio-economic and climatic variability. We have developed an integrated model, which is used to investigate the diffusion of household water conservation technologies and its effects on the UWS, under different socio-economic and climatic scenarios. The integrated model is formed by coupling a System Dynamics model that simulates the water technology adoption process, and the Urban Water Optioneering Tool (UWOT) for the detailed simulation of the urban water cycle. The model and approach are tested and demonstrated in an urban redevelopment area in Athens, Greece under different socio-economic scenarios and policy interventions. It is suggested that the proposed approach can establish quantifiable links between socio-economic change and UWS responses and therefore assist decision makers in designing more effective and resilient long-term strategies for water conservation. Copyright © 2017 Elsevier B.V. All rights reserved.
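
    As a stand-in for the System Dynamics adoption component, a Bass-type diffusion model shows how uptake of a conservation technology can be simulated and linked to demand; all parameters below are hypothetical, and the paper's actual coupling is to UWOT rather than to this toy:

    ```python
    import numpy as np

    def bass_adoption(m=10000, p=0.01, q=0.35, years=15):
        """Bass diffusion: p external influence (campaigns, pricing),
        q social imitation, m total market of households."""
        cum, path = 0.0, []
        for _ in range(years):
            new = (p + q * cum / m) * (m - cum)  # adoption hazard * remainder
            cum += new
            path.append(cum)
        return np.array(path)

    households = bass_adoption()
    saving_per_hh = 30.0                         # hypothetical m3/year saved
    print("demand reduction (m3/yr):", households[-1] * saving_per_hh)
    ```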

  16. Embracing uncertainty in applied ecology.

    PubMed

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  17. Detecting Unobserved Heterogeneity in the Relationship between Subjective Well-Being and Satisfaction in Various Domains of Life Using the REBUS-PLS Path Modelling Approach: A Case Study

    ERIC Educational Resources Information Center

    Zanin, Luca

    2013-01-01

    In this article, we propose a model to estimate the direct and indirect effects of the relationship between subjective well-being and satisfaction in various domains of life using a partial least squares path modelling approach in a structural equation model framework. A drawback of these models is that they assume homogeneous behaviour over the…

  18. A semi-supervised Support Vector Machine model for predicting the language outcomes following cochlear implantation based on pre-implant brain fMRI imaging.

    PubMed

    Tan, Lirong; Holland, Scott K; Deshpande, Aniruddha K; Chen, Ye; Choo, Daniel I; Lu, Long J

    2015-12-01

    We developed a machine learning model to predict whether or not a cochlear implant (CI) candidate will develop effective language skills within 2 years after the CI surgery by using the pre-implant brain fMRI data from the candidate. The language performance was measured 2 years after the CI surgery by the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2). Based on the CELF-P2 scores, the CI recipients were designated as either effective or ineffective CI users. For feature extraction from the fMRI data, we constructed contrast maps using the general linear model, and then utilized the Bag-of-Words (BoW) approach that we previously published to convert the contrast maps into feature vectors. We trained both supervised models and semi-supervised models to classify CI users as effective or ineffective. Compared with the conventional feature extraction approach, which used each single voxel as a feature, our BoW approach gave rise to much better performance for the classification of effective versus ineffective CI users. The semi-supervised model with the feature set extracted by the BoW approach from the contrast of speech versus silence achieved a leave-one-out cross-validation AUC as high as 0.97. Recursive feature elimination unexpectedly revealed that two features were sufficient to provide highly accurate classification of effective versus ineffective CI users based on our current dataset. We have validated the hypothesis that pre-implant cortical activation patterns revealed by fMRI during infancy correlate with language performance 2 years after cochlear implantation. The two brain regions highlighted by our classifier are potential biomarkers for the prediction of CI outcomes. Our study also demonstrated the superiority of the semi-supervised model over the supervised model. It is always worthwhile to try a semi-supervised model when unlabeled data are available.
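
    A sketch of the semi-supervised workflow with leave-one-out evaluation, using scikit-learn's self-training wrapper around an SVM as a stand-in for the authors' model, and synthetic vectors in place of the BoW fMRI features:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(30, 10))    # stand-ins for BoW feature vectors
    y_lab = rng.integers(0, 2, 30)       # effective vs ineffective CI users
    X_unl = rng.normal(size=(40, 10))    # unlabeled candidates

    scores = np.zeros(len(y_lab))
    for train, test in LeaveOneOut().split(X_lab):
        X = np.vstack([X_lab[train], X_unl])
        y = np.concatenate([y_lab[train], -np.ones(len(X_unl), dtype=int)])
        clf = SelfTrainingClassifier(SVC(probability=True)).fit(X, y)
        scores[test] = clf.predict_proba(X_lab[test])[:, 1]

    print("LOO AUC:", roc_auc_score(y_lab, scores))  # ~0.5 on random data
    ```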

  19. An integrative formal model of motivation and decision making: The MGPM*.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew

    2016-09-01

    We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Assessing the Moral Coherence and Moral Robustness of Social Systems: Proof of Concept for a Graphical Models Approach.

    PubMed

    Hoss, Frauke; London, Alex John

    2016-12-01

    This paper presents a proof of concept for a graphical models approach to assessing the moral coherence and moral robustness of systems of social interactions. "Moral coherence" refers to the degree to which the rights and duties of agents within a system are effectively respected when agents in the system comply with the rights and duties that are recognized as in force for the relevant context of interaction. "Moral robustness" refers to the degree to which a system of social interaction is configured to ensure that the interests of agents are effectively respected even in the face of noncompliance. Using the case of conscientious objection of pharmacists to filling prescriptions for emergency contraception as an example, we illustrate how a graphical models approach can help stakeholders identify structural weaknesses in systems of social interaction and evaluate the relative merits of alternate organizational structures. By illustrating the merits of a graphical models approach we hope to spur further developments in this area.

  1. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  2. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model.

    PubMed

    Luck, Jeff; Hagigi, Fred; Parker, Louise E; Yano, Elizabeth M; Rubenstein, Lisa V; Kirchner, JoAnn E

    2009-09-28

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  3. Restricted spatial regression in practice: Geostatistical models, confounding, and robustness under model misspecification

    USGS Publications Warehouse

    Hanks, Ephraim M.; Schliep, Erin M.; Hooten, Mevin B.; Hoeting, Jennifer A.

    2015-01-01

    In spatial generalized linear mixed models (SGLMMs), covariates that are spatially smooth are often collinear with spatially smooth random effects. This phenomenon is known as spatial confounding and has been studied primarily in the case where the spatial support of the process being studied is discrete (e.g., areal spatial data). In this case, the most common approach suggested is restricted spatial regression (RSR) in which the spatial random effects are constrained to be orthogonal to the fixed effects. We consider spatial confounding and RSR in the geostatistical (continuous spatial support) setting. We show that RSR provides computational benefits relative to the confounded SGLMM, but that Bayesian credible intervals under RSR can be inappropriately narrow under model misspecification. We propose a posterior predictive approach to alleviating this potential problem and discuss the appropriateness of RSR in a variety of situations. We illustrate RSR and SGLMM approaches through simulation studies and an analysis of malaria frequencies in The Gambia, Africa.
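
    The core RSR device — restricting the spatial random effect to the orthogonal complement of the fixed effects — can be sketched in a few lines. This is schematic only; fitting the full SGLMM, and the posterior predictive check the authors propose, require the complete hierarchical machinery:

    ```python
    import numpy as np

    def rsr_basis(X, Sigma, n_keep=25):
        """Project a spatial covariance onto the orthogonal complement of the
        fixed-effect columns X (assumed full rank), so the retained
        random-effect basis cannot be collinear with the covariates."""
        n = X.shape[0]
        P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # I - X(X'X)^-1 X'
        vals, vecs = np.linalg.eigh(P @ Sigma @ P)         # restricted covariance
        keep = np.argsort(vals)[::-1][:n_keep]
        return vecs[:, keep] * np.sqrt(np.maximum(vals[keep], 0.0))
    ```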

  4. Comparing species distribution models constructed with different subsets of environmental predictors

    USGS Publications Warehouse

    Bucklin, David N.; Basille, Mathieu; Benscoter, Allison M.; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.; Speroterra, Carolina; Watling, James I.

    2014-01-01

    Our results indicate that additional predictors have relatively minor effects on the accuracy of climate-based species distribution models and minor to moderate effects on spatial predictions. We suggest that implementing species distribution models with only climate predictors may provide an effective and efficient approach for initial assessments of environmental suitability.

  5. Elliptic Relaxation of a Tensor Representation for the Redistribution Terms in a Reynolds Stress Turbulence Model

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Gatski, T. B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model that utilizes a tensor representation for the redistribution terms in the Reynolds stress equations is presented. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. Direct numerical simulation data and Reynolds stress solutions using a full differential approach are compared for the case of fully developed channel flow.

  6. Multi-physics modelling approach for oscillatory microengines: application for a microStirling generator design

    NASA Astrophysics Data System (ADS)

    Formosa, F.; Fréchette, L. G.

    2015-12-01

    An electrical circuit equivalent (ECE) approach has been set up allowing elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale Free Piston membrane Stirling engine [1]. We propose here new developments taking scaling effects into account to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.

  7. Elliptic Relaxation of a Tensor Representation of the Pressure-Strain and Dissipation Rate

    NASA Technical Reports Server (NTRS)

    Carlson, John R.; Gatski, Thomas B.

    2002-01-01

    A formulation to include the effects of wall-proximity in a second moment closure model is presented that utilizes a tensor representation for the redistribution term in the Reynolds stress equations. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. DNS data and Reynolds stress solutions using a full differential approach at channel Reynolds number of 590 are compared to the new model.

  8. Endocrine disrupting chemicals in fish: developing exposure indicators and predictive models of effects based on mechanism of action.

    PubMed

    Ankley, Gerald T; Bencic, David C; Breen, Michael S; Collette, Timothy W; Conolly, Rory B; Denslow, Nancy D; Edwards, Stephen W; Ekman, Drew R; Garcia-Reyero, Natalia; Jensen, Kathleen M; Lazorchak, James M; Martinović, Dalma; Miller, David H; Perkins, Edward J; Orlando, Edward F; Villeneuve, Daniel L; Wang, Rong-Lin; Watanabe, Karen H

    2009-05-05

    Knowledge of possible toxic mechanisms (or modes) of action (MOA) of chemicals can provide valuable insights as to appropriate methods for assessing exposure and effects, thereby reducing uncertainties related to extrapolation across species, endpoints and chemical structure. However, MOA-based testing has seldom been used for assessing the ecological risk of chemicals. This is in part because past regulatory mandates have focused more on adverse effects of chemicals (reductions in survival, growth or reproduction) than on the pathways through which these effects are elicited. A recent departure from this involves endocrine-disrupting chemicals (EDCs), where there is a need to understand both MOA and adverse outcomes. To achieve this understanding, advances in predictive approaches are required whereby mechanistic changes caused by chemicals at the molecular level can be translated into apical responses meaningful to ecological risk assessment. In this paper we provide an overview and illustrative results from a large, integrated project that assesses the effects of EDCs on two small fish models, the fathead minnow (Pimephales promelas) and zebrafish (Danio rerio). For this work a systems-based approach is being used to delineate toxicity pathways for 12 model EDCs with different known or hypothesized toxic MOA. The studies employ a combination of state-of-the-art genomic (transcriptomic, proteomic, metabolomic), bioinformatic and modeling approaches, in conjunction with whole animal testing, to develop response linkages across biological levels of organization. This understanding forms the basis for predictive approaches for species, endpoint and chemical extrapolation. Although our project is focused specifically on EDCs in fish, we believe that the basic conceptual approach has utility for systematically assessing exposure and effects of chemicals with other MOA across a variety of biological systems.

  9. The Effectiveness of Project Based Learning in Trigonometry

    NASA Astrophysics Data System (ADS)

    Gerhana, M. T. C.; Mardiyana, M.; Pramudya, I.

    2017-09-01

    This research explored the effectiveness of Project-Based Learning (PjBL) with a scientific approach, viewed from interpersonal intelligence, on students' mathematics learning achievement. This research employed a quasi-experimental design. The subjects of this research were grade X MIPA students in Sleman, Yogyakarta. The results showed that the project-based learning model is more effective at improving students' mathematics learning achievement than the classical model with a scientific approach. This is because, in the PjBL model, students are better able to think actively and creatively. Students are faced with a pleasant atmosphere in which to solve a problem from everyday life. The project-based learning model is therefore expected to be a choice for teachers seeking to improve mathematics education.

  10. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
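
    The data-generating structure of a Poisson-lognormal model with fixed-over-time random effects can be written down directly; the values below are invented, and the paper's full Bayes comparison of posterior precision sits on top of such a structure via MCMC:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_sites, years = 200, 5
    beta0, sigma_u = 0.5, 0.6

    # Fixed-over-time random effects: one draw per site, shared across years.
    u_site = rng.normal(0.0, sigma_u, n_sites)
    mu = np.exp(beta0 + u_site)                  # site-level mean crash rates
    crashes = rng.poisson(mu[:, None], size=(n_sites, years))

    # The lognormal effect inflates variance relative to a pure Poisson model:
    print("mean:", crashes.mean(), "variance:", crashes.var())
    ```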

  11. An Assessment of Five Modeling Approaches for Thermo-Mechanical Stress Analysis of Laminated Composite Panels

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Malik, M.

    2000-01-01

    A study is made of the effects of variation in the lamination and geometric parameters, and boundary conditions of multi-layered composite panels on the accuracy of the detailed response characteristics obtained by five different modeling approaches. The modeling approaches considered include four two-dimensional models, each with five parameters to characterize the deformation in the thickness direction, and a predictor-corrector approach with twelve displacement parameters. The two-dimensional models are a first-order shear deformation theory, a third-order theory, a theory based on trigonometric variation of the transverse shear stresses through the thickness, and a discrete-layer theory. The combination of the following four key elements distinguishes the present study from previous studies reported in the literature: (1) the standard of comparison is taken to be the solutions obtained by using three-dimensional continuum models for each of the individual layers; (2) both mechanical and thermal loadings are considered; (3) boundary conditions other than simply supported edges are considered; and (4) quantities compared include detailed through-the-thickness distributions of transverse shear and transverse normal stresses. Based on the numerical studies conducted, the predictor-corrector approach appears to be the most effective technique for obtaining accurate transverse stresses, and for thermal loading, none of the two-dimensional models is adequate for calculating transverse normal stresses, even when used in conjunction with three-dimensional equilibrium equations.

  12. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    NASA Astrophysics Data System (ADS)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here, which uses, within the Henning model, experimental surface tension data obtained for each organic species in the presence of salt. We recommend method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
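
    For orientation, one commonly used form of the Szyszkowski-Langmuir equation in the aerosol literature (our illustration; parameter conventions vary between studies) relates surface tension to the organic concentration C:

```latex
\sigma = \sigma_w - a\,T\,\ln\left(1 + b\,C\right)
```

    where \sigma_w is the surface tension of pure water, T the temperature, and a, b fitted parameters; weighted S-L models, broadly speaking, combine such per-compound terms.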

  13. Detection of epistatic effects with logic regression and a classical linear regression model.

    PubMed

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often incapable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and a real data analysis.
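
    A hedged sketch of the contrast described here (toy simulated data; not the authors' implementation): encode the Boolean hypothesis directly as a single logic predictor and compare it with the additive-plus-product Cockerham-style encoding.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
g1 = rng.integers(0, 2, n)                    # binary genotype at locus 1
g2 = rng.integers(0, 2, n)                    # binary genotype at locus 2
y = 1.5 * np.logical_or(g1, g2) + rng.normal(0, 1, n)  # trait driven by (g1 OR g2)

# Cockerham-style encoding: main effects plus product interaction
X_cock = sm.add_constant(np.column_stack([g1, g2, g1 * g2]).astype(float))
print(sm.OLS(y, X_cock).fit().rsquared)

# logic-regression-style encoding: one Boolean term, fewer parameters
X_logic = sm.add_constant(np.logical_or(g1, g2).astype(float))
print(sm.OLS(y, X_logic).fit().rsquared)
```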

  14. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  15. Impact of the Systemic Approach on Literacy Achievement of Jordanian 1st Graders at Mu'tah University Model School

    ERIC Educational Resources Information Center

    Al-Hajaya, Nail

    2012-01-01

    This study investigates the effect of the systemic approach on literacy achievement of first grade students at Mu'tah University's Model School. The sample (N = 45) consisted of all first grade students, who were assigned to two groups: a control group taught traditionally, while the other group was exposed to the systemic approach during the…

  16. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    PubMed

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.

  17. The Effects of Sand Sediment Volume Heterogeneities on Sound Propagation and Scattering

    DTIC Science & Technology

    2011-09-01

    previously developed at APL-UW for the study of high-frequency acoustics. These models include perturbation models applied to scattering from the...shell shapes (Figure 1). The acoustic modeling to this point has utilized Ivakin's unified approach to volume and roughness scattering [3...sediments: A modeling approach and application to a shelly sand-mud environment," in the Proceedings of the European Conference on Underwater Acoustics

  18. Developing a model for effective leadership in healthcare: a concept mapping approach.

    PubMed

    Hargett, Charles William; Doty, Joseph P; Hauck, Jennifer N; Webb, Allison Mb; Cook, Steven H; Tsipis, Nicholas E; Neumann, Julie A; Andolsek, Kathryn M; Taylor, Dean C

    2017-01-01

    Despite increasing awareness of the importance of leadership in healthcare, our understanding of the competencies of effective leadership remains limited. We used a concept mapping approach (a blend of qualitative and quantitative analysis of group processes to produce a visual composite of the group's ideas) to identify stakeholders' mental model of effective healthcare leadership, clarifying the underlying structure and importance of leadership competencies. Literature review, focus groups, and consensus meetings were used to derive a representative set of healthcare leadership competency statements. Study participants subsequently sorted and rank-ordered these statements based on their perceived importance in contributing to effective healthcare leadership in real-world settings. Hierarchical cluster analysis of individual sortings was used to develop a coherent model of effective leadership in healthcare. A diverse group of 92 faculty and trainees individually rank-sorted 33 leadership competency statements. The highest rated statements were "Acting with Personal Integrity", "Communicating Effectively", "Acting with Professional Ethical Values", "Pursuing Excellence", "Building and Maintaining Relationships", and "Thinking Critically". Combining the results from hierarchical cluster analysis with our qualitative data led to a healthcare leadership model based on the core principle of Patient Centeredness and the core competencies of Integrity, Teamwork, Critical Thinking, Emotional Intelligence, and Selfless Service. Using a mixed qualitative-quantitative approach, we developed a graphical representation of a shared leadership model derived in the healthcare setting. This model may enhance learning, teaching, and patient care in this important area, as well as guide future research.

  19. Predicting the F(ab)-mediated effect of monoclonal antibodies in vivo by combining cell-level kinetic and pharmacokinetic modelling.

    PubMed

    Krippendorff, Ben-Fillippo; Oyarzún, Diego A; Huisinga, Wilhelm

    2012-04-01

    Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies in the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
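
    A minimal sketch of the kind of coupling described (our toy model and rate constants, not the authors' published system): a one-compartment pharmacokinetic equation for free antibody linked to cell-level receptor binding.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_el, k_on, k_off = 0.1, 1.0, 0.01   # assumed elimination and binding rates
R_tot = 1.0                          # total receptor level (assumed units)

def rhs(t, y):
    C, RC = y                        # free antibody, antibody-receptor complex
    binding = k_on * C * (R_tot - RC) - k_off * RC
    return [-k_el * C - binding,     # PK elimination plus loss to binding
            binding]                 # complex formation (cell-level kinetics)

sol = solve_ivp(rhs, (0.0, 100.0), [10.0, 0.0])
occupancy = sol.y[1] / R_tot         # fractional receptor occupancy over time
```

    Sweeping k_on/k_off in such a coupled system is, in spirit, how one probes whether the inhibitory effect saturates with increasing affinity.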

  20. A special case of reduced rank models for identification and modelling of time varying effects in survival analysis.

    PubMed

    Perperoglou, Aris

    2016-12-10

    Flexible survival models are needed when modelling data from long-term follow-up studies. In many cases, the assumption of proportionality imposed by a Cox model will not be valid. Instead, a model that can identify time-varying effects of fixed covariates can be used. Although there are several approaches that deal with this problem, it is not always straightforward to choose which covariates should be modelled with time-varying effects and which should not. At the same time, it is up to the researcher to define appropriate time functions that describe the dynamic pattern of the effects. In this work, we suggest a model that can deal with both fixed and time-varying effects and uses simple hypothesis tests to distinguish which covariates have dynamic effects. The model is an extension of the parsimonious reduced-rank model of rank 1. As such, the number of parameters is kept low, and thus a flexible set of time functions, such as B-splines, can be used. The basic theory is illustrated along with an efficient fitting algorithm. The proposed method is applied to a dataset of breast cancer patients and compared with a multivariate fractional polynomials approach for modelling time-varying effects. Copyright © 2016 John Wiley & Sons, Ltd.
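
    As a hedged illustration of the structure involved (our notation, not necessarily the paper's), a rank-1 reduced-rank model lets all covariates share one set of time functions:

```latex
\lambda(t \mid \mathbf{x}) = \lambda_0(t)\,\exp\!\Big(\sum_{j} x_j\,\gamma_j \sum_{s} \theta_s f_s(t)\Big)
```

    where the f_s(t) are flexible basis functions such as B-splines, the \theta_s are time weights shared across covariates, and the loading \gamma_j scales how strongly covariate j follows the common dynamic pattern.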

  1. Imputational Modeling of Spatial Context and Social Environmental Predictors of Walking in an Underserved Community: The PATH Trial

    PubMed Central

    Ellerbe, Caitlyn; Lawson, Andrew B.; Alia, Kassandra A.; Meyers, Duncan C.; Coulon, Sandra M.; Lawman, Hannah G.

    2013-01-01

    Background This study examined imputational modeling of the effects of spatial proximity and social factors on walking in African American adults. Purpose Models were compared that examined relationships between household proximity to a walking trail and social factors in determining walking status. Methods Participants (N=133; 66% female; mean age=55 yrs) were recruited to a police-supported walking and social marketing intervention. Bayesian modeling was used to identify predictors of walking at 12 months. Results Sensitivity analyses using different imputation approaches, and spatial contextual effects, were compared. All the imputation methods showed that social life and income were significant predictors of walking; however, the complete data approach was the best model, indicating that Age (OR 1.04, 95% CI: 1.00, 1.08), Social Life (OR 0.83, 95% CI: 0.69, 0.98), and Income > $10,000 (OR 0.10, 95% CI: 0.01, 0.97) were all predictors of walking. Conclusions The complete data approach was the best model of predictors of walking in African Americans. PMID:23481250

  2. Imputational modeling of spatial context and social environmental predictors of walking in an underserved community: the PATH trial.

    PubMed

    Wilson, Dawn K; Ellerbe, Caitlyn; Lawson, Andrew B; Alia, Kassandra A; Meyers, Duncan C; Coulon, Sandra M; Lawman, Hannah G

    2013-03-01

    This study examined imputational modeling of the effects of spatial proximity and social factors on walking in African American adults. Models were compared that examined relationships between household proximity to a walking trail and social factors in determining walking status. Participants (N=133; 66% female; mean age=55 years) were recruited to a police-supported walking and social marketing intervention. Bayesian modeling was used to identify predictors of walking at 12 months. Sensitivity analyses using different imputation approaches, and spatial contextual effects, were compared. All the imputation methods showed that social life and income were significant predictors of walking; however, the complete data approach was the best model, indicating that Age (OR 1.04, 95% CI: 1.00, 1.08), Social Life (OR 0.83, 95% CI: 0.69, 0.98), and Income <$10,000 (OR 0.10, 95% CI: 0.01, 0.97) were all predictors of walking. The complete data approach was the best model of predictors of walking in African Americans. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
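
    A hedged numpy sketch of the two averaging ideas (our simplification of the approach, with an assumed upscaling factor): block-average fine-grid elevations for cell storage, and take minima along shared faces so narrow channels still connect coarse cells.

```python
import numpy as np

z = np.random.rand(8, 8)              # fine-grid elevations; 4x4 blocks -> 2x2 coarse grid
f = 4                                 # upscaling factor (assumed)

# cell-block average: representative elevation for storage within each coarse cell
blocks = z.reshape(z.shape[0] // f, f, z.shape[1] // f, f)
z_cell = blocks.mean(axis=(1, 3))

# cell-face value: minimum along each shared east face, preserving low channels
# that the block mean would average away
east = z[:, f - 1::f][:, :-1]         # last fine column of each coarse cell (interior faces)
z_face = east.reshape(-1, f, east.shape[1]).min(axis=1)
```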

  4. An Extended Petri-Net Based Approach for Supply Chain Process Enactment in Resource-Centric Web Service Environment

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi

    Enacting a supply-chain process involves various partners and different IT systems. REST is receiving increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in resource-centric Web service environments. In this paper, a Petri-net based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Then, based on the resource meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri net. Thereafter, XML-net, a high-level Petri net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. Therefore, unified resource representation and RESTful service descriptions are proposed for cross-system integration in a more effective way. A case study is given to illustrate the approach, and its desirable features are discussed.

  5. A preliminary comparison of hydrodynamic approaches for flood inundation modeling of urban areas in Jakarta Ciliwung river basin

    NASA Astrophysics Data System (ADS)

    Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu

    2017-07-01

    This paper addresses numerical modeling approaches for flood inundation in urban areas. A decisive strategy for choosing among 1D, 2D, or hybrid 1D-2D models is important for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available high-performance computing facilities. The application of 1D, 1D/2D, and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin, and a comparison of the approaches benchmarked for the inundation study, are presented. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of inundation results and computational aspects. The findings provide an interesting comparison between modeling approaches, HEC-RAS 1D, 1D-2D, 2D, and ANUGA, when benchmarked against the Manggarai water level measurement.

  6. Quantifying second generation ethanol inhibition: Design of Experiments approach and kinetic model development.

    PubMed

    Schneiderman, Steven J; Johnson, Roger W; Menkhaus, Todd J; Gilcrease, Patrick C

    2015-03-01

    While softwoods represent a potential feedstock for second generation ethanol production, compounds present in their hydrolysates can inhibit fermentation. In this study, a novel Design of Experiments (DoE) approach was used to identify significant inhibitory effects on Saccharomyces cerevisiae D5A for the purpose of guiding kinetic model development. Although acetic acid, furfural, and 5-hydroxymethyl furfural (HMF) were present at potentially inhibitory levels, initial factorial experiments identified only ethanol as a significant rate inhibitor. It was hypothesized that high ethanol levels masked the effects of other inhibitors, and a subsequent factorial design without ethanol found significant effects for all other compounds. When these non-ethanol effects were accounted for in the kinetic model, the adjusted R² was significantly improved over an ethanol-inhibition-only model (adjusted R² = 0.80 vs. 0.76). In conclusion, when ethanol masking effects are removed, DoE is a valuable tool to identify significant non-ethanol inhibitors and guide kinetic model development. Copyright © 2014 Elsevier Ltd. All rights reserved.
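
    A hedged sketch of a two-level factorial screen of the kind described (toy coded design and response values, not the study's data): each main effect is estimated as the difference in mean response between high and low factor levels.

```python
import itertools
import numpy as np

factors = ["acetic_acid", "furfural", "HMF"]                   # assumed factor names
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 2^3 coded runs

# assumed fermentation-rate responses for the eight runs (toy numbers)
rate = np.array([1.00, 0.95, 0.90, 0.88, 0.97, 0.93, 0.85, 0.82])

for j, name in enumerate(factors):
    effect = rate[design[:, j] == 1].mean() - rate[design[:, j] == -1].mean()
    print(f"{name}: main effect {effect:+.3f}")
```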

  7. Effectiveness of Training Model Capacity Building for Entrepreneurship Women Based Empowerment Community

    ERIC Educational Resources Information Center

    Idawati; Mahmud, Alimuddin; Dirawan, Gufran Darma

    2016-01-01

    The purpose of this research was to determine the effectiveness of a training model for community-based capacity building of women's entrepreneurship. The research used a Research and Development approach, referring to the development research model of Romiszowski (1996) combined with the development model of Sugiono (2011); it was…

  8. Modeling fuels and fire effects in 3D: Model description and applications

    Treesearch

    Francois Pimont; Russell Parsons; Eric Rigolot; Francois de Coligny; Jean-Luc Dupuy; Philippe Dreyfus; Rodman R. Linn

    2016-01-01

    Scientists and managers critically need ways to assess how fuel treatments alter fire behavior, yet few tools currently exist for this purpose. We present a spatially explicit fuel-modeling system, FuelManager, which models fuels, vegetation growth, fire behavior (using a physics-based model, FIRETEC), and fire effects. FuelManager's flexible approach facilitates...

  9. Emerging systems biology approaches in nanotoxicology: Towards a mechanism-based understanding of nanomaterial hazard and risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Pedro M.; Fadeel, Bengt

    Engineered nanomaterials are being developed for a variety of technological applications. However, the increasing use of nanomaterials in society has led to concerns about their potential adverse effects on human health and the environment. During the first decade of nanotoxicological research, the realization has emerged that effective risk assessment of the multitudes of new nanomaterials would benefit from a comprehensive understanding of their toxicological mechanisms, which is difficult to achieve with traditional, low-throughput, single end-point oriented approaches. Therefore, systems biology approaches are being progressively applied within the nano(eco)toxicological sciences. This novel paradigm implies that the study of biological systems should be integrative, resulting in quantitative and predictive models of nanomaterial behaviour in a biological system. To this end, global ‘omics’ approaches with which to assess changes in genes, proteins, metabolites, etc. are deployed, allowing for computational modelling of the biological effects of nanomaterials. Here, we highlight omics and systems biology studies in nanotoxicology, aiming towards the implementation of a systems nanotoxicology and mechanism-based risk assessment of nanomaterials. - Highlights: • Systems nanotoxicology is a multi-disciplinary approach to quantitative modelling. • Transcriptomics, proteomics and metabolomics remain the most common methods. • Global “omics” techniques should be coupled to computational modelling approaches. • The discovery of nano-specific toxicity pathways and biomarkers is a prioritized goal. • Overall, experimental nanosafety research must endeavour reproducibility and relevance.

  10. The use of GIS and modelling approaches in squirrel population management and conservation: A review

    Treesearch

    P. W. W. Lurz; J. L. Koprowski; D. J. A. Wood

    2008-01-01

    We review modelling approaches in relation to three key areas of sciurid ecology: management, disease risk assessments and conservation. Models enable us to explore different scenarios to develop effective management and conservation strategies. They may also assist in identifying and targeting research needs for tree and flying squirrels. However, there is a need to...

  11. ‘Survival’: a simulation toolkit introducing a modular approach for radiobiological evaluations in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Manganaro, L.; Russo, G.; Bourhaleb, F.; Fausti, F.; Giordanengo, S.; Monaco, V.; Sacchi, R.; Vignati, A.; Cirio, R.; Attili, A.

    2018-04-01

    One major rationale for the application of heavy ion beams in tumour therapy is their increased relative biological effectiveness (RBE). The complex dependencies of the RBE on dose, biological endpoint, position in the field, etc., require the use of biophysical models in treatment planning and clinical analysis. This study aims to introduce a new software package, named ‘Survival’, to facilitate the radiobiological computations needed in ion therapy. The simulation toolkit was written in C++ and developed with a modular architecture in order to easily incorporate different radiobiological models. The following models were successfully implemented: the local effect model (LEM, versions I, II and III) and variants of the microdosimetric-kinetic model (MKM). Different numerical evaluation approaches were also implemented: Monte Carlo (MC) numerical methods and a set of faster analytical approximations. Among the possible applications, the toolkit was used to reproduce the RBE versus LET for different ions (proton, He, C, O, Ne) and different cell lines (CHO, HSG). Intercomparisons between different models (LEM and MKM) and computational approaches (MC and fast approximations) were performed. The developed software could represent an important tool for the evaluation of the biological effectiveness of charged particles in ion beam therapy, in particular when coupled with treatment simulations. Its modular architecture facilitates benchmarking and inter-comparison between different models and evaluation approaches. The code is open source (GPL2 license) and available at https://github.com/batuff/Survival.
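
    For orientation (a standard radiobiological relation, not specific to this toolkit), both LEM and MKM ultimately predict cell survival; the linear-quadratic model writes the surviving fraction after dose D as

```latex
S(D) = \exp\left(-\alpha D - \beta D^{2}\right)
```

    and the RBE is then the ratio of the photon dose to the ion dose producing the same survival level.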

  12. An objective Bayesian analysis of a crossover design via model selection and model averaging.

    PubMed

    Li, Dandan; Sivaganesan, Siva

    2016-11-10

    Inference about the treatment effect in a crossover design has received much attention over time, owing to the uncertainty in the existence of the carryover effect and its impact on the estimation of the treatment effect. Adding to this uncertainty is that the existence of the carryover effect and its size may depend on the presence of the treatment effect and its size. We consider estimation and hypothesis testing about the treatment effect in a two-period crossover design, assuming a normally distributed response variable, and use an objective Bayesian approach to test the hypothesis about the treatment effect and to estimate its size when it exists, while accounting for the uncertainty about the presence of the carryover effect as well as the treatment and period effects. We evaluate and compare the performance of the proposed approach with a standard frequentist approach using simulated and real data. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults and optimizing building control systems. However, in spite of good progress in developing tools for HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain errors. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first-principles modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.

  14. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Quantitative analysis of drug effects at the whole-body level: a case study for glucose metabolism in malaria patients.

    PubMed

    Snoep, Jacky L; Green, Kathleen; Eicher, Johann; Palm, Daniel C; Penkler, Gerald; du Toit, Francois; Walters, Nicolas; Burger, Robert; Westerhoff, Hans V; van Niekerk, David D

    2015-12-01

    We propose a hierarchical modelling approach to construct models for disease states at the whole-body level. Such models can simulate effects of drug-induced inhibition of reaction steps on the whole-body physiology. We illustrate the approach for glucose metabolism in malaria patients, by merging two detailed kinetic models for glucose metabolism in the parasite Plasmodium falciparum and the human red blood cell with a coarse-grained model for whole-body glucose metabolism. In addition we use a genome-scale metabolic model for the parasite to predict amino acid production profiles by the malaria parasite that can be used as a complex biomarker. © 2015 Authors; published by Portland Press Limited.

  16. Teacher challenges, perceptions, and use of science models in middle school classrooms about climate, weather, and energy concepts

    NASA Astrophysics Data System (ADS)

    Yarker, Morgan Brown

    Research suggests that scientific models and modeling should be topics covered in K-12 classrooms as part of a comprehensive science curriculum. It is especially important when talking about topics in weather and climate, where computer and forecast models are the center of attention. There are several approaches to model-based inquiry, but it can be argued, theoretically, that science models can be effectively implemented into any approach to inquiry if they are utilized appropriately. Yet it remains to be explored how science models are actually implemented in classrooms. This study qualitatively examines three middle school science teachers' use of science models with various approaches to inquiry during their weather and climate units. Results indicate that the teacher who used the most elements of inquiry used models in a way that aligned better with the theoretical framework than did the teachers who used fewer elements of inquiry. The theoretical framework compares an approach to argument-based inquiry with model-based inquiry, and argues that the approaches are essentially identical, so teachers who use inquiry should be able to apply model-based inquiry using the same approach. However, none of the teachers in this study had a complete understanding of the role models play in authentic science inquiry; therefore, students were not explicitly exposed to the ideas that models can be used to make predictions about, and are representations of, a natural phenomenon. Rather, models were explicitly used to explain concepts to students or have students explain concepts to the teacher or to each other. Additionally, models were used as a focal point for conversation between students, usually as they were creating, modifying, or using models. Teachers were not observed asking students to evaluate models. Since science models are an important aspect of understanding science, it is important that teachers not only know how to implement models in an inquiry environment, but also understand the characteristics of science models so that they can explicitly teach the concept of modeling to students. This study suggests that better pre-service and in-service teacher education is needed to prepare teachers to teach about science models effectively.

  17. How to resolve the SLOSS debate: lessons from species-diversity models.

    PubMed

    Tjørve, Even

    2010-05-21

    The SLOSS debate--whether a single large reserve will conserve more species than several small ones--of the 1970s and 1980s never came to a resolution. The first rule of reserve design states that one large reserve will conserve the most species, a rule which has been heavily contested. Empirical data seem to undermine the reliance on general rules, indicating that the best strategy varies from case to case. Modeling has also been deployed in this debate. We may divide the modeling approaches to the SLOSS enigma into dynamic and static approaches. Dynamic approaches, covered by the equilibrium theory of island biogeography and metapopulation theory, look at immigration, emigration, and extinction. Static approaches, such as the one in this paper, illustrate how several factors affect the number of reserves that will save the most species. This article approaches the effect of different factors through the application of species-diversity models. These models combine species-area curves for two or more reserves, correcting for the species overlap between them. Such models generate several predictions about how different factors affect the optimal number of reserves. The main predictions are: fewer and larger reserves are favored by increased species overlap between reserves, by faster growth in the number of species with reserve-area increase, by higher minimum-area requirements, by spatial aggregation, and by uneven species abundances. The effect of increased distance between smaller reserves depends on two counteracting factors: decreased species density caused by isolation (which enhances the minimum-area effect) and decreased overlap between isolates. The first decreases the optimal number of reserves; the second increases it. The effect of total reserve-system area depends both on the shape of the species-area curve and on whether overlap between reserves changes with scale. The approach to modeling presented here has several implications for conservation strategies. It illustrates well how the SLOSS enigma can be reduced to a question of the shape of the species-area curve expected or generated from reserves of different sizes and a question of overlap between isolates (or reserves). Copyright (c) 2010 Elsevier Ltd. All rights reserved.
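
    For orientation (the standard power-law species-area form; the overlap term is our schematic notation, not the paper's exact models), the species-diversity models referred to combine species-area curves with an overlap correction:

```latex
S(A) = cA^{z}, \qquad S_{1 \cup 2} = S(A_1) + S(A_2) - S_{\mathrm{overlap}}
```

    so the optimal number of reserves hinges on the exponent z and on how S_overlap behaves as reserves are subdivided or moved apart.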

  18. Application of System Operational Effectiveness Methodology to Space Launch Vehicle Development and Operations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Kelley, Gary W.

    2012-01-01

    The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.

  19. Dynamic simulation modelling of policy responses to reduce alcohol-related harms: rationale and procedure for a participatory approach.

    PubMed

    Atkinson, Jo-An; O'Donnell, Eloise; Wiggers, John; McDonnell, Geoff; Mitchell, Jo; Freebairn, Louise; Indig, Devon; Rychetnik, Lucie

    2017-02-15

    Development of effective policy responses to address complex public health problems can be challenged by a lack of clarity about the interaction of risk factors driving the problem, differing views of stakeholders on the most appropriate and effective intervention approaches, a lack of evidence to support commonly implemented and acceptable intervention approaches, and a lack of acceptance of effective interventions. Consequently, political considerations, community advocacy and industry lobbying can contribute to a hotly contested debate about the most appropriate course of action; this can hinder consensus and give rise to policy resistance. The problem of alcohol misuse and its associated harms in New South Wales (NSW), Australia, provides a relevant example of such challenges. Dynamic simulation modelling is increasingly being valued by the health sector as a robust tool to support decision making to address complex problems. It allows policy makers to ask 'what-if' questions and test the potential impacts of different policy scenarios over time, before solutions are implemented in the real world. Participatory approaches to modelling enable researchers, policy makers, program planners, practitioners and consumer representatives to collaborate with expert modellers to ensure that models are transparent, incorporate diverse evidence and perspectives, are better aligned to the decision-support needs of policy makers, and can facilitate consensus building for action. This paper outlines a procedure for embedding stakeholder engagement and consensus building in the development of dynamic simulation models that can guide the development of effective, coordinated and acceptable policy responses to complex public health problems, such as alcohol-related harms in NSW.

  20. Approaches to modeling landscape-scale drought-induced forest mortality

    USGS Publications Warehouse

    Gustafson, Eric J.; Shinneman, Douglas

    2015-01-01

    Drought stress is an important cause of tree mortality in forests, and drought-induced disturbance events are projected to become more common in the future due to climate change. Landscape Disturbance and Succession Models (LDSM) are becoming widely used to project climate change impacts on forests, including potential interactions with natural and anthropogenic disturbances, and to explore the efficacy of alternative management actions to mitigate negative consequences of global changes on forests and ecosystem services. Recent studies incorporating drought-mortality effects into LDSMs have projected significant potential changes in forest composition and carbon storage, largely due to differential impacts of drought on tree species and interactions with other disturbance agents. In this chapter, we review how drought affects forest ecosystems and the different ways drought effects have been modeled (both spatially and aspatially) in the past. Building on those efforts, we describe several approaches to modeling drought effects in LDSMs, discuss advantages and shortcomings of each, and include two case studies for illustration. The first approach features the use of empirically derived relationships between measures of drought and the loss of tree biomass to drought-induced mortality. The second uses deterministic rules of species mortality for given drought events to project changes in species composition and forest distribution. A third approach is more mechanistic, simulating growth reductions and death caused by water stress. Because modeling of drought effects in LDSMs is still in its infancy, and because drought is expected to play an increasingly important role in forest health, further development of modeling drought-forest dynamics is urgently needed.

  1. Systematic review and overview of health economic evaluation models in obesity prevention and therapy.

    PubMed

    Schwander, Bjoern; Hiligsmann, Mickaël; Nuijten, Mark; Evers, Silvia

    2016-10-01

    Given the increasing clinical and economic burden of obesity, it is of major importance to identify cost-effective approaches for obesity management. Areas covered: This study aims to systematically review and compile an overview of published decision models for health economic assessments (HEA) in obesity, in order to summarize and compare their key characteristics as well as to identify, inform and guide future research. Of the 4,293 abstracts identified, 87 papers met our inclusion criteria. A wide range of different methodological approaches have been identified. Of the 87 papers, 69 (79%) applied unique /distinctive modelling approaches. Expert commentary: This wide range of approaches suggests the need to develop recommendations /minimal requirements for model-based HEA of obesity. In order to reach this long-term goal, further research is required. Valuable future research steps would be to investigate the predictiveness, validity and quality of the identified modelling approaches.

  2. Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Yu, Zhiping; Dutton, Robert W.; Ancona, Mario G.; Saini, Subhash (Technical Monitor)

    1998-01-01

    We describe an electronic transport model and an implementation approach that respond to the challenges of device modeling for gigascale integration. We use the density-gradient (DG) transport model, which adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We present the current implementation of the DG model in PROPHET, a partial differential equation solver developed by Lucent Technologies. This implementation approach permits rapid development and enhancement of models, as well as run-time modifications and model switching. We show that even in typical bulk transport devices such as P-N diodes and BJTs, DG quantum effects can significantly modify the I-V characteristics. Quantum effects are shown to be even more significant in small, surface transport devices, such as sub-0.1 micron MOSFETs. In thin-oxide MOS capacitors, we find that quantum effects may reduce gate capacitance by 25% or more. The inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements. Significant quantum corrections also occur in the I-V characteristics of short-channel MOSFETs due to the gate capacitance correction.
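
    For orientation, a common statement of the density-gradient correction (conventions for signs and coefficients vary across the literature, so treat this as a sketch): the electron quasi-potential of drift-diffusion gains a term sensitive to the curvature of the square root of the carrier density,

```latex
\varphi_n = \varphi - \frac{kT}{q}\,\ln\frac{n}{n_i} + 2\,b_n\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}}, \qquad b_n \propto \frac{\hbar^{2}}{q\,m_n^{*}}
```

    which is what produces the quantum smoothing of carrier density profiles and the tunneling behaviour mentioned above.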

  3. General Framework for Effect Sizes in Cluster Randomized Experiments

    ERIC Educational Resources Information Center

    VanHoudnos, Nathan

    2016-01-01

    Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
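
    As a hedged sketch of the standard setup for such data (toy simulated clusters; names are illustrative), a normal mixed-effects model with a fixed treatment effect and random cluster intercepts can be fit in Python:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(20), 25)            # 20 clusters x 25 students
treat = (clusters % 2).astype(float)               # cluster-level assignment
u = rng.normal(0, 0.5, 20)[clusters]               # random cluster intercepts
y = 0.3 * treat + u + rng.normal(0, 1, clusters.size)

df = pd.DataFrame({"y": y, "treat": treat, "cluster": clusters})
fit = smf.mixedlm("y ~ treat", df, groups=df["cluster"]).fit()
print(fit.summary())                               # treatment effect approx. 0.3
```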

  4. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  5. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer performance losses due to degradation effects; the durability of HT-PEM is therefore currently an important focus of research and development. In this paper a novel approach is presented for an integrated short-term and long-term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short-term and long-term effects are commonly modeled separately because of their different time scales. However, in accelerated lifetime testing, long-term degradation effects have a crucial impact on the short-term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open-voltage cycle test on an HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is carried out to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, with little loss of accuracy. The approach promises a comprehensive perspective encompassing short-term dynamic behavior and long-term degradation effects.

  6. Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22

    ERIC Educational Resources Information Center

    Braun, Henry; Qu, Yanxuan

    2008-01-01

    This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…

  7. Testing Mediation in Structural Equation Modeling: The Effectiveness of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Leth-Steensen, Craig; Gallitto, Elena

    2016-01-01

    A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…

  8. Dynamic multipopulation and density dependent evolutionary games related to replicator dynamics. A metasimplex concept.

    PubMed

    Argasinski, Krzysztof

    2006-07-01

    This paper contains the basic extensions of classical evolutionary games (multipopulation and density dependent models). It is shown that the classical bimatrix approach is inconsistent with other approaches because it does not depend on the proportion between populations. The main conclusion is that the interspecific proportion parameter is important and must be considered in multipopulation models. The paper provides a synthesis of both extensions (a metasimplex concept) which solves the problem intrinsic to the bimatrix model. It allows us to model interactions among any number of subpopulations, including density dependence effects. We prove that all modern approaches to evolutionary games are closely related. All evolutionary models (except classical bimatrix approaches) can be reduced to a single-population general model by a simple change of variables. Differences between classical bimatrix evolutionary games and a new model which depends on the interspecific proportion are shown by examples.
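
    For orientation (standard single-population replicator dynamics, our notation): with strategy frequencies x_i and payoff matrix A, the general model that such extensions reduce to is

```latex
\dot{x}_i = x_i\left[(A\mathbf{x})_i - \mathbf{x}^{\top} A \mathbf{x}\right]
```

    so a strategy grows exactly when its payoff exceeds the population-average payoff.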

  9. Inverse odds ratio-weighted estimation for causal mediation analysis.

    PubMed

    Tchetgen Tchetgen, Eric J

    2013-11-20

    An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
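
    A hedged sketch of the weighting recipe as we read it (toy simulated data; variable names are illustrative, and this is not a substitute for the paper's estimators): fit a logistic model of the exposure on the mediator and covariates, weight exposed subjects by the inverse of the estimated exposure-mediator odds ratio function, then run a weighted outcome regression for the direct effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
C = rng.normal(size=n)                                         # baseline covariate
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * C))).astype(float)  # exposure
M = 0.8 * A + 0.3 * C + rng.normal(size=n)                     # mediator
Y = 0.5 * A + 0.7 * M + 0.2 * C + rng.normal(size=n)           # outcome
df = pd.DataFrame({"A": A, "M": M, "C": C, "Y": Y})

# step 1: exposure model; the mediator coefficient defines the odds ratio function
expo = smf.logit("A ~ M + C", df).fit(disp=0)

# step 2: exposed subjects weighted by the inverse odds ratio, unexposed by 1
w = np.where(df["A"] == 1, np.exp(-expo.params["M"] * df["M"]), 1.0)

# step 3: weighted regression gives the direct effect; indirect = total - direct
direct = smf.wls("Y ~ A + C", df, weights=w).fit().params["A"]
total = smf.ols("Y ~ A + C", df).fit().params["A"]
print(f"direct {direct:.2f}, indirect {total - direct:.2f}")
```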

  10. An Active Learning Approach to Teach Advanced Multi-Predictor Modeling Concepts to Clinicians

    ERIC Educational Resources Information Center

    Samsa, Gregory P.; Thomas, Laine; Lee, Linda S.; Neal, Edward M.

    2012-01-01

    Clinicians have characteristics--high scientific maturity, low tolerance for symbol manipulation and programming, limited time outside of class--that limit the effectiveness of traditional methods for teaching multi-predictor modeling. We describe an active-learning based approach that shows particular promise for accommodating these…

  11. A Mixed Learning Approach in Mechatronics Education

    ERIC Educational Resources Information Center

    Yilmaz, O.; Tuncalp, K.

    2011-01-01

    This study aims to investigate the effect of a Web-based mixed learning approach model on mechatronics education. The model combines different perception methods such as reading, listening, and speaking and practice methods developed in accordance with the vocational background of students enrolled in the course Electromechanical Systems in…

  12. Fish population modeling approaches for assessing direct effects and recovery following mitigation of a pulp mill effluent in Jackfish Bay

    EPA Science Inventory

    We present an approach to link chemically-induced alterations in molecular and biochemical endpoints to adverse outcomes in whole organisms and populations. A predictive population model was developed to translate changes in fecundity measures of white sucker (Catostomus commers...

  13. CES--Cultural, Experiential, Skill Building: The Cognitive Foundation.

    ERIC Educational Resources Information Center

    Rheams, Annie E.; Gallagher, Maureen

    1995-01-01

    Critiques the assimilation strategy and the hero-heroine-ritual approach to multicultural education, and offers a third model, the Cultural, Experiential, Skill Building (CES) approach, as an alternative for teacher training. Effects of the CES model on potential teachers and the implications for teacher training are addressed. (GR)

  14. Differences in Mortality among Heroin, Cocaine, and Methamphetamine Users: A Hierarchical Bayesian Approach

    PubMed Central

    Liang, Li-Jung; Huang, David; Brecht, Mary-Lynn; Hser, Yih-ing

    2010-01-01

    Studies examining differences in mortality among long-term drug users have been limited. In this paper, we introduce a Bayesian framework that jointly models survival data using a Weibull proportional hazard model with frailty, and substance and alcohol data using mixed-effects models, to examine differences in mortality among heroin, cocaine, and methamphetamine users from five long-term follow-up studies. The traditional approach to analyzing combined survival data from numerous studies assumes that the studies are homogeneous, thus the estimates may be biased due to unobserved heterogeneity among studies. Our approach allows us to structurally combine the data from different studies while accounting for correlation among subjects within each study. Markov chain Monte Carlo facilitates the implementation of Bayesian analyses. Despite the complexity of the model, our approach is relatively straightforward to implement using WinBUGS. We demonstrate our joint modeling approach to the combined data and discuss the results from both approaches. PMID:21052518

  15. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes through independent local models, are appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models, and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition, so normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of various nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and older studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
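
    A hedged minimal implementation of an LSSVM regressor (the standard dual formulation solved as one linear system; the hyperparameters are arbitrary toy choices), the building block used as a local model above:

```python
import numpy as np

def rbf_kernel(X1, X2, s=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    # KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                       # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, s=1.0):
    return rbf_kernel(X_new, X_train, s) @ alpha + b

# toy usage: fit a noisy sine
X = np.linspace(0, 6, 60)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(3).normal(size=60)
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, alpha, b, X)
```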

  16. Propensity score analysis with partially observed covariates: How should multiple imputation be used?

    PubMed

    Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J

    2017-01-01

    Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. An alternative approach has been proposed, however, in which the propensity scores are combined across the imputed datasets (MIps). Uncertainties therefore remain about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods with complete case analysis; with the missingness pattern approach, which uses a different propensity score model for each pattern of missingness; and with a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios, and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
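    A rough sketch of the MIte recipe follows: impute the partially observed covariate with the outcome in the imputation model, run the IPTW analysis in each imputed data set, and pool the point estimates. Variable names are hypothetical, the imputation model ignores parameter-draw uncertainty, and Rubin's variance pooling is omitted for brevity, so this illustrates the workflow rather than a complete implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)

def iptw_effect(X, treat, y):
    """Marginal risk difference via inverse probability of treatment
    weighting (Hajek-style weighted means)."""
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return (np.average(y[treat == 1], weights=w[treat == 1])
            - np.average(y[treat == 0], weights=w[treat == 0]))

def mite(x, miss, treat, y, M=20):
    """MIte sketch: impute the covariate x (outcome included in the
    imputation model), run the PS analysis per imputed set, then pool
    the point estimates."""
    obs = ~miss
    design = np.column_stack([treat, y])          # the outcome IS included
    imp = LinearRegression().fit(design[obs], x[obs])
    sd = np.std(x[obs] - imp.predict(design[obs]))
    estimates = []
    for _ in range(M):
        x_m = x.copy()
        # draw from an approximate predictive distribution; full MI would
        # also draw the imputation-model parameters
        x_m[miss] = imp.predict(design[miss]) + rng.normal(0, sd, miss.sum())
        estimates.append(iptw_effect(x_m[:, None], treat, y))
    return np.mean(estimates)
```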

  17. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.

    PubMed

    Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel

    2011-05-23

    Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, where the number of level-1 (patient-level) units was large relative to the number of level-2 (center-level) units. However, on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or Bayesian approach (if based on vague priors, and assuming no preference on philosophical grounds). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches, the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.

  18. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), whose recording is independent of clouds and weather, can be used effectively for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented, combining an adaptive Weibull Multiplicative Model (WMM) with MultiLayer Perceptron (MLP) neural networks to differentiate between dark spots and the background. The results are compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach removes the need to hand-tune the non-adaptive WMM filter parameters by developing an adaptive WMM, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images containing dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM and pulse coupled neural network (PCNN) model yields poor accuracy. PMID:25474376

  19. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, when the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine whether the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, its computational complexity and the lack of software packages have limited its practical application. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be used in sensitivity analyses in longitudinal studies to examine violations of the ignorable missingness assumption and the implications for health outcomes. © 2017 John Wiley & Sons Ltd.
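    A heavily simplified rendering of the two-step idea is sketched below, assuming a long-format data frame with hypothetical columns subject, visit, missing and outcome. A per-subject average of missingness residuals stands in for the paper's GLMM-predicted random effect, and a linear outcome model replaces the generalized linear mixed model, so treat this purely as a structural sketch.

```python
import numpy as np
import statsmodels.formula.api as smf

def two_step_sketch(df):
    """Step 1: model missingness and extract a per-subject propensity
    summary (a crude stand-in for the GLMM-predicted random effect).
    Step 2: refit the outcome model adjusting for that summary."""
    # Step 1: missingness model; the subject-level mean deviation between
    # observed and predicted missingness proxies the random effect.
    miss_fit = smf.logit("missing ~ visit", data=df).fit(disp=0)
    df = df.assign(resid_miss=df["missing"] - miss_fit.predict(df))
    re_hat = df.groupby("subject")["resid_miss"].mean().rename("re_hat")
    df = df.merge(re_hat, on="subject")
    # Step 2: outcome model on the observed rows, adjusted for re_hat;
    # a significant re_hat coefficient flags nonignorable missingness.
    obs = df[df["missing"] == 0]
    return smf.ols("outcome ~ visit + re_hat", data=obs).fit().params
```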

  20. A Bayesian approach for estimating under-reported dengue incidence with a focus on non-linear associations between climate and dengue in Dhaka, Bangladesh.

    PubMed

    Sharmin, Sifat; Glass, Kathryn; Viennet, Elvina; Harley, David

    2018-04-01

    Determining the relation between climate and dengue incidence is challenging due to under-reporting of disease and consequent biased incidence estimates. Non-linear associations between climate and incidence compound this. Here, we introduce a modelling framework to estimate dengue incidence from passive surveillance data while incorporating non-linear climate effects. We estimated the true number of cases per month using a Bayesian generalised linear model, developed in stages to adjust for under-reporting. A semi-parametric thin-plate spline approach was used to quantify non-linear climate effects. The approach was applied to data collected from the national dengue surveillance system of Bangladesh. The model estimated that only 2.8% (95% credible interval 2.7-2.8) of all cases in the capital Dhaka were reported through passive case reporting. The optimal mean monthly temperature for dengue transmission is 29℃ and average monthly rainfall above 15 mm decreases transmission. Our approach provides an estimate of true incidence and an understanding of the effects of temperature and rainfall on dengue transmission in Dhaka, Bangladesh.

  1. Sensitivity of Above-Ground Biomass Estimates to Height-Diameter Modelling in Mixed-Species West African Woodlands

    PubMed Central

    Aynekulu, Ermias; Pitkänen, Sari; Packalen, Petteri

    2016-01-01

    It has been suggested that above-ground biomass (AGB) inventories should include tree height (H) in addition to diameter (D). As H is a difficult variable to measure, H-D models are commonly used to predict H. We tested a number of approaches for H-D modelling, including additive terms that increased the complexity of the model, and observed how differences in tree-level predictions of H propagated to plot-level AGB estimations. We were especially interested in detecting whether the choice of method can lead to bias. The compared approaches, listed in order of increasing complexity, were: (B0) AGB estimation from D only; (B1) also involving H obtained from a fixed-effects H-D model; (B2) also involving species; (B3) also including between-plot variability as random effects; and (B4) involving multilevel nested random effects for grouping plots in clusters. In light of the results, the modelling approach affected the AGB estimation significantly in some cases, although differences were negligible for some of the alternatives. The most important differences were found between estimations that included H and those that did not. We observed that AGB predictions without H information were very sensitive to the environmental stress parameter (E), which can induce a critical bias. Regarding the H-D modelling, the most relevant effect was found when species was included as an additive term. We presented a two-step methodology that succeeded in identifying the species for which the general H-D relation needed modification. Based on the results, our final choice was the single-level mixed-effects model (B3), which accounts for species but also for plot random effects reflecting site-specific factors such as soil properties and degree of disturbance. PMID:27367857
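    The contrast between the fixed-effects (B2) and plot-level mixed-effects (B3) alternatives can be written in a few lines of statsmodels code. The log-log H-D form and all data below are synthetic placeholders, not the study's fitted models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 300
trees = pd.DataFrame({
    "D": rng.uniform(5, 60, n),                # diameter (cm)
    "species": rng.choice(["A", "B", "C"], n), # hypothetical species codes
    "plot": rng.integers(0, 12, n),            # sample plot id
})
# allometric-style synthetic heights with plot-level noise
plot_eff = rng.normal(0, 0.1, 12)
trees["H"] = np.exp(0.5 + 0.55 * np.log(trees["D"])
                    + plot_eff[trees["plot"]] + rng.normal(0, 0.08, n))
trees["log_D"], trees["log_H"] = np.log(trees["D"]), np.log(trees["H"])

# (B2)-style fixed-effects H-D model: species as an additive term
b2 = smf.ols("log_H ~ log_D + C(species)", data=trees).fit()
# (B3)-style model: plot random intercepts capture site-specific factors
b3 = smf.mixedlm("log_H ~ log_D + C(species)", data=trees,
                 groups=trees["plot"]).fit()
print(b3.params)
```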

  2. Simulating canopy temperature for modelling heat stress in cereals

    USDA-ARS?s Scientific Manuscript database

    Crop models must be improved to account for the large effects of heat stress on crop yields. To date, most approaches in crop models use air temperature, despite evidence that crop canopy temperature better explains yield reductions associated with high-temperature events. This study presents...

  3. Survey of DoD Profit Policy and Further Analysis of the Estimation Theory

    DTIC Science & Technology

    1999-12-01

    [Fragmented table-of-contents text: Capital Asset Pricing Model of DoD Profit; Application of the CAPM to the Weighted Guidelines Policy; Working Capital Employed; Facilities Capital; Effectiveness of Policy.] ...and Rogerson's approach to the weighted guidelines policy using a capital asset pricing model approach. Both models are examined in the...

  4. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    PubMed

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific, input-causation models for blood glucose concentration (BGC) over large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, providing an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology with unique attributes for meeting the critical requirements of effective, long-term FFC.
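    A Wiener structure chains a linear dynamic block with a static nonlinearity. The single-input toy below only illustrates that block ordering; the paper's multiple-input, subject-specific method is considerably more elaborate, and the dynamics, gains and nonlinearity here are invented.

```python
import numpy as np

def wiener_predict(u, a, b, f):
    """Wiener block: first-order linear dynamics followed by a static
    nonlinearity f.  v[k] = a*v[k-1] + b*u[k];  y[k] = f(v[k])."""
    v = np.zeros(len(u))
    for k in range(1, len(u)):
        v[k] = a * v[k - 1] + b * u[k]
    return f(v)

# e.g. a meal input driving BGC through saturating dynamics
u = np.r_[np.zeros(10), np.ones(30), np.zeros(60)]   # input signal
y = wiener_predict(u, a=0.9, b=0.1, f=lambda v: 120 + 80 * np.tanh(v))
```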

  5. Modeling foreign exchange market activity around macroeconomic news: Hawkes-process approach.

    PubMed

    Rambaldi, Marcello; Pennesi, Paris; Lillo, Fabrizio

    2015-01-01

    We present a Hawkes-model approach to the foreign exchange market in which the high-frequency price dynamics is affected by a self-exciting mechanism and an exogenous component, generated by the pre-announced arrival of macroeconomic news. By focusing on time windows around the news announcement, we find that the model is able to capture the increase of trading activity after the news, both when the news has a sizable effect on volatility and when this effect is negligible, either because the news is not important or because the announcement is in line with the forecast by analysts. We extend the model by considering noncausal effects, due to the fact that the existence of the news (but not its content) is known to the market before the announcement.
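    The intensity described here can be sketched as a baseline plus a self-exciting sum plus an exponentially decaying news term, and simulated by Ogata thinning. Parameter values and the form of the news kernel below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

def intensity(t, events, mu, alpha, beta, t_news, gamma, delta):
    """Hawkes intensity: baseline + self-excitation + exponentially
    decaying boost switched on at the news time t_news."""
    past = events[events < t]
    self_exc = alpha * np.exp(-beta * (t - past)).sum()
    news = gamma * np.exp(-delta * (t - t_news)) if t >= t_news else 0.0
    return mu + self_exc + news

def simulate(T, t_news, mu=0.5, alpha=0.6, beta=1.5, gamma=3.0, delta=0.8):
    """Ogata thinning: propose with an upper bound, accept with ratio."""
    events, t = np.array([]), 0.0
    while t < T:
        # between events the intensity decays, so the current value plus
        # a possible news jump of size gamma is a valid upper bound
        lam_bar = intensity(t, events, mu, alpha, beta,
                            t_news, gamma, delta) + gamma
        t += rng.exponential(1 / lam_bar)
        if rng.uniform() < intensity(t, events, mu, alpha, beta,
                                     t_news, gamma, delta) / lam_bar:
            events = np.append(events, t)
    return events[events < T]

trades = simulate(T=60.0, t_news=30.0)
```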

  6. Modeling foreign exchange market activity around macroeconomic news: Hawkes-process approach

    NASA Astrophysics Data System (ADS)

    Rambaldi, Marcello; Pennesi, Paris; Lillo, Fabrizio

    2015-01-01

    We present a Hawkes-model approach to the foreign exchange market in which the high-frequency price dynamics is affected by a self-exciting mechanism and an exogenous component, generated by the pre-announced arrival of macroeconomic news. By focusing on time windows around the news announcement, we find that the model is able to capture the increase of trading activity after the news, both when the news has a sizable effect on volatility and when this effect is negligible, either because the news is not important or because the announcement is in line with the forecast by analysts. We extend the model by considering noncausal effects, due to the fact that the existence of the news (but not its content) is known to the market before the announcement.

  7. Population modeling for pesticide risk assessment of threatened species-A case study of a terrestrial plant, Boltonia decurrens.

    PubMed

    Schmolke, Amelie; Brain, Richard; Thorbek, Pernille; Perkins, Daniel; Forbes, Valery

    2017-02-01

    Although population models are recognized as necessary tools in the ecological risk assessment of pesticides, particularly for species listed under the Endangered Species Act, their application in this context is currently limited to very few cases. The authors developed a detailed, individual-based population model for a threatened plant species, the decurrent false aster (Boltonia decurrens), for application in pesticide risk assessment. Floods and competition with other plant species are known factors that drive the species' population dynamics and were included in the model approach. The authors use the model to compare the population-level effects of 5 toxicity surrogates applied to B. decurrens under varying environmental conditions. The model results suggest that the environmental conditions under which herbicide applications occur may have a higher impact on populations than organism-level sensitivities to an herbicide within a realistic range. Indirect effects may be as important as the direct effects of herbicide applications by shifting competition strength if competing species have different sensitivities to the herbicide. The model approach provides a case study for population-level risk assessments of listed species. Population-level effects of herbicides can be assessed in a realistic and species-specific context, and uncertainties can be addressed explicitly. The authors discuss how their approach can inform the future development and application of modeling for population-level risk assessments of listed species, and ecological risk assessment in general. Environ Toxicol Chem 2017;36:480-491. © 2016 SETAC.

  8. Theoretical Methods of Domain Structures in Ultrathin Ferroelectric Films: A Review

    PubMed Central

    Liu, Jianyi; Chen, Weijin; Wang, Biao; Zheng, Yue

    2014-01-01

    This review covers methods and recent developments of the theoretical study of domain structures in ultrathin ferroelectric films. The review begins with an introduction to some basic concepts and theories (e.g., polarization and its modern theory, ferroelectric phase transition, domain formation, and finite size effects, etc.) that are relevant to the study of domain structures in ultrathin ferroelectric films. Basic techniques and recent progress of a variety of important approaches for domain structure simulation, including first-principles calculation, molecular dynamics, Monte Carlo simulation, effective Hamiltonian approach and phase field modeling, as well as multiscale simulation are then elaborated. For each approach, its important features and relative merits over other approaches for modeling domain structures in ultrathin ferroelectric films are discussed. Finally, we review recent theoretical studies on some important issues of domain structures in ultrathin ferroelectric films, with an emphasis on the effects of interfacial electrostatics, boundary conditions and external loads. PMID:28788198

  9. Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.

    PubMed

    Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z

    2018-01-01

    Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.

  10. Revisiting the pole tide for and from satellite altimetry

    NASA Astrophysics Data System (ADS)

    Desai, Shailen; Wahr, John; Beckley, Brian

    2015-12-01

    Satellite altimeter sea surface height observations include the geocentric displacements caused by the pole tide, namely the response of the solid Earth and oceans to polar motion. Most users of these data remove these effects using a model that was developed more than 20 years ago. We describe two improvements to the pole tide model for satellite altimeter measurements. Firstly, we recommend an approach that improves the model for the response of the oceans by including the effects of self-gravitation, loading, and mass conservation. Our recommended approach also specifically includes the previously ignored displacement of the solid Earth due to the load of the ocean response, and includes the effects of geocenter motion. Altogether, this improvement amplifies the modeled geocentric pole tide by 15 %, or up to 2 mm of sea surface height displacement. We validate this improvement using two decades of satellite altimeter measurements. Secondly, we recommend that the altimetry pole tide model exclude geocentric sea surface displacements resulting from the long-term drift in polar motion. The response to this particular component of polar motion requires a more rigorous approach than is used by conventional models. We show that erroneously including the response to this component of polar motion in the pole tide model impacts interpretation of regional sea level rise by ± 0.25 mm/year.

  11. A Systems Biology Approach to Toxicology Research with Small Fish Models

    EPA Science Inventory

    Increasing use of mechanistically-based molecular and biochemical endpoints and in vitro assays is being advocated as a more efficient and cost-effective approach for generating chemical hazard data. However, development of effective assays and application of the resulting data i...

  12. A values-based approach to medical leadership.

    PubMed

    Moen, Charlotte; Prescott, Patricia

    2016-11-02

    Integrity, trust and authenticity are essential characteristics of an effective leader, demonstrated through a values-based approach to leadership. This article explores whether Covey's (1989) principle-centred leadership model is a useful approach to developing doctors' leadership qualities and skills.

  13. Matrix approach to land carbon cycle modeling: A case study with the Community Land Model.

    PubMed

    Huang, Yuanyuan; Lu, Xingjie; Shi, Zheng; Lawrence, David; Koven, Charles D; Xia, Jianyang; Du, Zhenggang; Kluzek, Erik; Luo, Yiqi

    2018-03-01

    The terrestrial carbon (C) cycle has been commonly represented by a series of C balance equations to track C influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C cycle processes well but makes it difficult to track model behaviors. It is also computationally expensive, limiting the ability to conduct comprehensive parametric sensitivity analyses. To overcome these challenges, we have developed a matrix approach, which reorganizes the C balance equations in the original ESM into one matrix equation without changing any modeled C cycle processes and mechanisms. We applied the matrix approach to the Community Land Model (CLM4.5) with vertically resolved biogeochemistry. The matrix equation exactly reproduces litter and soil organic carbon (SOC) dynamics of the standard CLM4.5 across different spatial-temporal scales. The matrix approach enables effective diagnosis of system properties such as C residence time and attribution of global change impacts to relevant processes. We illustrated, for example, that the impacts of CO2 fertilization on litter and SOC dynamics can be easily decomposed into the relative contributions from C input, allocation of external C into different C pools, nitrogen regulation, altered soil environmental conditions, and vertical mixing along the soil profile. In addition, the matrix tool can accelerate model spin-up, permit thorough parametric sensitivity tests, enable pool-based data assimilation, and facilitate tracking and benchmarking of model behaviors. Overall, the matrix approach can make a broad range of future modeling activities more efficient and effective. © 2017 John Wiley & Sons Ltd.
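    In miniature, the matrix form reads dX/dt = B u(t) + A ξ K X, and steady states and residence times drop out of one linear solve. The three-pool example below uses invented numbers, not CLM4.5 values.

```python
import numpy as np

# Minimal 3-pool sketch (litter -> fast SOC -> slow SOC) of the matrix
# form dX/dt = B*u(t) + A @ xi @ K @ X
u = 2.0                            # C input flux (illustrative units)
B = np.array([1.0, 0.0, 0.0])      # all input enters the litter pool
K = np.diag([0.05, 0.01, 0.001])   # baseline turnover rates
xi = np.diag([0.8, 0.8, 0.6])      # environmental scalars (moisture, T)
A = np.array([[-1.0, 0.0, 0.0],    # off-diagonals: transfer fractions
              [0.45, -1.0, 0.0],
              [0.0, 0.1, -1.0]])

M = A @ xi @ K
X_ss = np.linalg.solve(-M, B * u)  # steady state solves M X + B u = 0
tau = np.linalg.solve(-M, B)       # pool-wise residence times of input
print("steady-state pools:", X_ss, "residence times:", tau)
```

    The payoff of the reorganization is visible even at this scale: spin-up becomes a linear solve instead of a long forward integration.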

  14. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those obtained using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
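    A two-class regression mixture can be fitted with a short EM loop: the E-step assigns class responsibilities, the M-step runs weighted least squares per class. The sketch below is a generic illustration on synthetic data, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def regression_mixture_em(x, y, n_iter=200):
    """Two-class regression mixture via EM: each latent class has its
    own intercept, slope, and residual variance."""
    X = np.column_stack([np.ones_like(x), x])
    beta = rng.normal(size=(2, 2))        # per-class [intercept, slope]
    sigma2 = np.ones(2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from class-conditional normal densities
        dens = np.stack([pi[k] / np.sqrt(2 * np.pi * sigma2[k])
                         * np.exp(-(y - X @ beta[k]) ** 2 / (2 * sigma2[k]))
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted least squares per class
        for k in range(2):
            W = r[k]
            XtW = X.T * W
            beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
            sigma2[k] = (W * (y - X @ beta[k]) ** 2).sum() / W.sum()
        pi = r.sum(axis=1) / len(y)
    return beta, sigma2, pi

# synthetic differential effect: half the sample has a steep slope
x = rng.uniform(-2, 2, 400)
cls = rng.binomial(1, 0.5, 400)
y = np.where(cls == 1, 1.5 * x, 0.2 * x) + rng.normal(0, 0.3, 400)
beta, sigma2, pi = regression_mixture_em(x, y)
```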

  15. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment-scale water management.

    PubMed

    Refsgaard, A; Jacobsen, T; Jacobsen, B; Ørum, J-E

    2007-01-01

    The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, in order to analyse the effects of specific, localized basin water management plans. The paper also includes a land rent modelling approach that can be used to choose the most cost-effective measures and their location. As a forerunner to the use of basin-scale models in WFD basin water management plans, this project demonstrates the potential and limitations of comprehensive, integrated modelling tools.

  16. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  17. Modelling the Cost Effectiveness of Disease-Modifying Treatments for Multiple Sclerosis

    PubMed Central

    Thompson, Joel P.; Abdolahi, Amir; Noyes, Katia

    2013-01-01

    Several cost-effectiveness models of disease-modifying treatments (DMTs) for multiple sclerosis (MS) have been developed for different populations and different countries. Vast differences in the approaches and discrepancies in the results give rise to heated discussions and limit the use of these models. Our main objective is to discuss the methodological challenges in modelling the cost effectiveness of treatments for MS. We conducted a review of published models to describe the approaches taken to date, to identify the key parameters that influence the cost effectiveness of DMTs, and to point out major areas of weakness and uncertainty. Thirty-six published models and analyses were identified. The greatest source of uncertainty is the absence of head-to-head randomized clinical trials. Modellers have used various techniques to compensate, including utilizing extension trials. The use of large observational cohorts in recent studies aids in identifying population-based, ‘real-world’ treatment effects. Major drivers of results include the time horizon modelled and DMT acquisition costs. Model endpoints must target either policy makers (using cost-utility analysis) or clinicians (conducting cost-effectiveness analyses). Lastly, the cost effectiveness of DMTs outside North America and Europe is currently unknown, with the lack of country-specific data as the major limiting factor. We suggest that limited data should not preclude analyses, as models may be built and updated in the future as data become available. Disclosure of modelling methods and assumptions could improve the transferability and applicability of models designed to reflect different healthcare systems. PMID:23640103

  18. An approach to combining parallel and cross-over trials with and without run-in periods using individual patient data.

    PubMed

    Tvete, Ingunn F; Olsen, Inge C; Fagerland, Morten W; Meland, Nils; Aldrin, Magne; Smerud, Knut T; Holden, Lars

    2012-04-01

    In active run-in trials, where patients may be excluded after a run-in period based on their response to the treatment, it is implicitly assumed that patients have individual treatment effects. If individual patient data are available, active run-in trials can be modelled using patient-specific random effects. With more than one trial on the same medication available, one can obtain a more precise overall treatment effect estimate. We present a model for joint analysis of a two-sequence, four-period cross-over trial (AABB/BBAA) and a three-sequence, two-period active run-in trial (AB/AA/A), where the aim is to investigate the effect of a new treatment for patients with pain due to osteoarthritis. Our approach enables us to separately estimate the direct treatment effect for all patients, for the patients excluded after the active run-in trial prior to randomisation, and for the patients who completed the active run-in trial. A similar model approach can be used to analyse other types of run-in trials, but this depends on the data and type of other trials available. We assume equality of the various carry-over effects over time. The proposed approach is flexible and can be modified to handle other designs. Our results should be encouraging for those responsible for planning cost-efficient clinical development programmes.

  19. Identification Approach to Alleviate Effects of Unmeasured Heat Gains for MIMO Building Thermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jie; Kim, Donghun; Braun, James E.

    It is important to have practical methods for constructing a good mathematical model of a building's thermal system for energy audits, retrofit analysis and advanced building controls, e.g. model predictive control. Identification approaches based on semi-physical model structures are popular in building science for these purposes. However, conventional gray-box identification approaches applied to thermal networks would fail when significant unmeasured heat gains are present in the estimation data. Although this situation is very common in practice, there has been little research tackling the issue in building science. This paper presents an overall identification approach to alleviate the influence of unmeasured disturbances and hence to obtain improved gray-box building models. The approach was applied to an existing open-space building and its performance is demonstrated.

  20. Putting the psychology back into psychological models: mechanistic versus rational approaches.

    PubMed

    Sakamoto, Yasuaki; Jones, Matt; Love, Bradley C

    2008-09-01

    Two basic approaches to explaining the nature of the mind are the rational and the mechanistic approaches. Rational analyses attempt to characterize the environment and the behavioral outcomes that humans seek to optimize, whereas mechanistic models attempt to simulate human behavior using processes and representations analogous to those used by humans. We compared these approaches with regard to their accounts of how humans learn the variability of categories. The mechanistic model departs in subtle ways from rational principles. In particular, the mechanistic model incrementally updates its estimates of category means and variances through error-driven learning, based on discrepancies between new category members and the current representation of each category. The model yields a prediction, which we verify, regarding the effects of order manipulations that the rational approach does not anticipate. Although both rational and mechanistic models can successfully postdict known findings, we suggest that psychological advances are driven primarily by consideration of process and representation and that rational accounts trail these breakthroughs.
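    The incremental updating attributed to the mechanistic model can be written as delta rules on the category mean and variance; because the updates are error-driven, presentation order changes the final estimates, which is the kind of order effect the rational account does not anticipate. The learning rate and items below are invented, not the authors' exact model.

```python
import numpy as np

def update_category(mu, var, x, lr=0.1):
    """Error-driven (delta-rule) updates: each new category member moves
    the stored mean and variance toward the observed discrepancy."""
    err = x - mu
    mu = mu + lr * err                  # move mean toward the new member
    var = var + lr * (err ** 2 - var)   # move variance toward squared error
    return mu, var

# order effect: the same items in different orders leave the incremental
# learner with different final mean and variance estimates
items_increasing = np.array([1.0, 2.0, 3.0, 8.0])
items_mixed = np.array([8.0, 1.0, 3.0, 2.0])
for seq in (items_increasing, items_mixed):
    mu, var = 0.0, 1.0
    for x in seq:
        mu, var = update_category(mu, var, x)
    print(seq, "->", round(mu, 3), round(var, 3))
```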

  1. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale, using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
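    The calibration-transfer logic, building the model on small-scale spectra and applying it to a blender-scale run, maps onto a standard partial least squares workflow. The snippet below uses random placeholder arrays purely so it executes; real use would substitute preprocessed NIR spectra and reference assay values, and the number of latent variables would be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# placeholder arrays: rows are NIR spectra, targets are blend fractions
rng = np.random.default_rng(5)
spectra_small = rng.normal(size=(60, 200))   # gram-scale, compressed samples
y_small = rng.uniform(0.05, 0.30, size=60)
spectra_line = rng.normal(size=(20, 200))    # spectra from a 1 kg blend run
y_line = rng.uniform(0.05, 0.30, size=20)

pls = PLSRegression(n_components=4)
pls.fit(spectra_small, y_small)              # calibrate on small-scale data
pred = pls.predict(spectra_line).ravel()     # apply to the blender-scale run
print("RMSEP:", mean_squared_error(y_line, pred) ** 0.5)
```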

  2. Bayesian informative dropout model for longitudinal binary data with random effects using conditional and joint modeling approaches.

    PubMed

    Chan, Jennifer S K

    2016-05-01

    Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, the dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes, as well as the dropout indicator, at each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly, such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Continuum modeling of large lattice structures: Status and projections

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Mikulas, Martin M., Jr.

    1988-01-01

    The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.

  4. Predicting flight delay based on multiple linear regression

    NASA Astrophysics Data System (ADS)

    Ding, Yi

    2017-08-01

    Delay of flight has been regarded as one of the toughest difficulties in aviation control, and establishing an effective model to handle the delay prediction problem is a significant task. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, compared against Naive Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model is approximately 80%, an improvement over the Naive Bayes and C4.5 approaches. Testing also shows that the method is convenient to compute and can predict flight delays effectively, providing a decision basis for airport authorities.
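    The regression step itself is routine; the sketch below fits a multiple linear regression on invented flight features (scheduled hour, carrier, distance, congestion) and scores it on held-out flights. Feature choice and data are assumptions, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = np.column_stack([rng.integers(0, 24, 1000),      # scheduled hour
                     rng.integers(0, 8, 1000),       # carrier id (encoded)
                     rng.uniform(200, 2000, 1000),   # route distance (km)
                     rng.uniform(0, 1, 1000)])       # congestion index
# synthetic delays driven by hour of day and congestion
delay = 5 + 0.8 * X[:, 0] + 30 * X[:, 3] + rng.normal(0, 10, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out flights:", model.score(X_te, y_te))
```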

  5. Vehicle track segmentation using higher order random fields

    DOE PAGES

    Quach, Tu-Thach

    2017-01-09

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  6. Vehicle track segmentation using higher order random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quach, Tu-Thach

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  7. Modelling Common Agricultural Policy-Water Framework Directive interactions and cost-effectiveness of measures to reduce nitrogen pollution.

    PubMed

    Mouratiadou, Ioanna; Russell, Graham; Topp, Cairistiona; Louhichi, Kamel; Moran, Dominic

    2010-01-01

    Selecting cost-effective measures to regulate agricultural water pollution to conform to the Water Framework Directive presents multiple challenges. A bio-economic modelling approach is presented that has been used to explore the water quality and economic effects of the 2003 Common Agricultural Policy Reform and to assess the cost-effectiveness of input quotas and emission standards against nitrate leaching, in a representative case study catchment in Scotland. The approach combines a biophysical model (NDICEA) with a mathematical programming model (FSSIM-MP). The results indicate only small changes due to the Reform, with the main changes in farmers' decision making and the associated economic and water quality indicators depending on crop price changes, and suggest the use of target fertilisation in relation to crop and soil requirements, as opposed to measures targeting farm total or average nitrogen use.

  8. Conceptual model for assessing criteria air pollutants in a multipollutant context: A modified adverse outcome pathway approach.

    PubMed

    Buckley, Barbara; Farraj, Aimen

    2015-09-01

    Air pollution consists of a complex mixture of particulate and gaseous components. Individual criteria and other hazardous air pollutants have been linked to adverse respiratory and cardiovascular health outcomes. However, assessing risk of air pollutant mixtures is difficult since components are present in different combinations and concentrations in ambient air. Recent mechanistic studies have limited utility because of the inability to link measured changes to adverse outcomes that are relevant to risk assessment. New approaches are needed to address this challenge. The purpose of this manuscript is to describe a conceptual model, based on the adverse outcome pathway approach, which connects initiating events at the cellular and molecular level to population-wide impacts. This may facilitate hazard assessment of air pollution mixtures. In the case reports presented here, airway hyperresponsiveness and endothelial dysfunction are measurable endpoints that serve to integrate the effects of individual criteria air pollutants found in inhaled mixtures. This approach incorporates information from experimental and observational studies into a sequential series of higher order effects. The proposed model has the potential to facilitate multipollutant risk assessment by providing a framework that can be used to converge the effects of air pollutants in light of common underlying mechanisms. This approach may provide a ready-to-use tool to facilitate evaluation of health effects resulting from exposure to air pollution mixtures. Published by Elsevier Ireland Ltd.

  9. Effective model approach to the dense state of QCD matter

    NASA Astrophysics Data System (ADS)

    Fukushima, Kenji

    2011-12-01

    The first-principle approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However, the model parameters that are fixed by hadronic properties in the vacuum may have unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu-Jona-Lasinio (PNJL) model as an effective description on the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at slightly higher temperature than deconfinement, which stays close to the chemical freeze-out points.
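    For orientation, the mean-field PNJL grand potential usually takes the schematic form below (written from the general literature, not from this abstract), with E_p the quark quasiparticle energy and the constituent mass generated by the chiral condensate; minimizing Ω with respect to σ, Φ and Φ̄ yields the mean fields that the matching condition constrains.

```latex
\Omega(\sigma,\Phi,\bar{\Phi};T,\mu) = \mathcal{U}(\Phi,\bar{\Phi};T)
  + \frac{\sigma^{2}}{4G}
  - 2N_{f}\!\int\!\frac{d^{3}p}{(2\pi)^{3}}\,\Big[\,
    3E_{p}\,\theta(\Lambda-|\vec{p}\,|)
  + T\ln\!\big(1+3\Phi e^{-(E_{p}-\mu)/T}
      +3\bar{\Phi}e^{-2(E_{p}-\mu)/T}+e^{-3(E_{p}-\mu)/T}\big)
  + T\ln\!\big(1+3\bar{\Phi}e^{-(E_{p}+\mu)/T}
      +3\Phi e^{-2(E_{p}+\mu)/T}+e^{-3(E_{p}+\mu)/T}\big)\Big],
\qquad E_{p}=\sqrt{p^{2}+M^{2}}
```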

  10. How models can support ecosystem-based management of coral reefs

    NASA Astrophysics Data System (ADS)

    Weijerman, Mariska; Fulton, Elizabeth A.; Janssen, Annette B. G.; Kuiper, Jan J.; Leemans, Rik; Robson, Barbara J.; van de Leemput, Ingrid A.; Mooij, Wolf M.

    2015-11-01

    Despite the importance of coral reef ecosystems to the social and economic welfare of coastal communities, the condition of these marine ecosystems has generally degraded over the past decades. With increased knowledge of coral reef ecosystem processes and a rise in computer power, dynamic models are useful tools in assessing the synergistic effects of local and global stressors on ecosystem functions. We review representative approaches for dynamically modeling coral reef ecosystems and categorize them as minimal, intermediate and complex models. The categorization was based on the leading principle for model development and the level of realism and process detail. This review aims to improve the knowledge of concurrent approaches in coral reef ecosystem modeling and highlights the importance of choosing an appropriate approach based on the type of question(s) to be answered. We contend that minimal and intermediate models are generally valuable tools to assess the response of key states to main stressors and, hence, contribute to understanding ecological surprises. As has been shown in freshwater resources management, insight into these conceptual relations profoundly influences how natural resource managers perceive their systems and how they manage ecosystem recovery. We argue that adaptive resource management requires integrated thinking and decision support, which demands a diversity of modeling approaches. Integration can be achieved through complementary use of models or through integrated models that systemically combine all relevant aspects in one model. Such whole-of-system models can be useful tools for quantitatively evaluating scenarios, allowing an assessment of the interactive effects of multiple stressors on various, potentially conflicting, management objectives. All models simplify reality and, as such, have their weaknesses. While minimal models lack multidimensionality, system models are likely to be difficult to interpret, as deciphering their numerous interactions and feedback loops requires considerable effort. Given the breadth of questions to be tackled when dealing with coral reefs, the best-practice approach uses multiple model types and thus benefits from the strengths of different model types.

  11. Effects of Scenario Planning on Participant Mental Models

    ERIC Educational Resources Information Center

    Glick, Margaret B.; Chermack, Thomas J.; Luckel, Henry; Gauck, Brian Q.

    2012-01-01

    Purpose: The purpose of this paper is to assess the effects of scenario planning on participant mental model styles. Design/methodology/approach: The scenario planning literature is consistent with claims that scenario planning can change individual mental models. These claims are supported by anecdotal evidence and stories from the practical…

  12. [Systematization and hygienic standardization of environmental factors on the basis of common graphic models].

    PubMed

    Galkin, A A

    2012-01-01

    On the basis of graphic models of the human response to environmental factors, two main types of complex quantitative influence were revealed, as well as the interrelation between deterministic effects at the individual level and stochastic effects at the population level. It is suggested that two main kinds of factors be distinguished: essential factors, which are intrinsic to the environment, and accidental factors, which are foreign to it. The two kinds call for different approaches to hygienic standardization: accidental factors need a point-like approach, whereas a two-level range approach is suitable for essential factors.

  13. Assessing the polycyclic aromatic hydrocarbon (PAH) pollution of urban stormwater runoff: a dynamic modeling approach.

    PubMed

    Zheng, Yi; Lin, Zhongrong; Li, Hao; Ge, Yan; Zhang, Wei; Ye, Youbin; Wang, Xuejun

    2014-05-15

    Urban stormwater runoff delivers a significant amount of polycyclic aromatic hydrocarbons (PAHs), mostly of atmospheric origin, to receiving water bodies. The PAH pollution of urban stormwater runoff poses a serious risk to aquatic life and human health, but has been overlooked by environmental modeling and management. This study proposes a dynamic modeling approach for assessing PAH pollution and its associated environmental risk. A variable time-step model was developed to simulate the continuous cycles of pollutant buildup and washoff. To reflect the complex interaction among different environmental media (i.e., atmosphere, dust and stormwater), the dependence of the pollution level on antecedent weather conditions was investigated and embodied in the model. Long-term simulations of the model can be performed efficiently, and probabilistic features of the pollution level and its risk can be easily determined. The applicability of this approach and its value to environmental management were demonstrated by a case study in Beijing, China. The results showed that Beijing's PAH pollution of road runoff is relatively severe, and its associated risk exhibits notable seasonal variation. The current sweeping practice is effective in mitigating the pollution, but the effectiveness is both weather-dependent and compound-dependent. The proposed modeling approach can help identify critical timing and major pollutants for monitoring, assessment and control efforts to focus on. The approach is extendable to other urban areas, as well as to other contaminants with fate and transport similar to PAHs. Copyright © 2014 Elsevier B.V. All rights reserved.
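    The buildup/washoff cycle at the core of such a model fits in a few lines; the sketch below uses the classic exponential-buildup, rating-curve-washoff formulation with invented parameters, not the Beijing calibration.

```python
import numpy as np

def buildup_washoff(runoff, dt=1.0, b_max=5.0, k_b=0.08, k_w=0.19, n_w=1.1):
    """Exponential buildup during dry steps, rating-curve washoff during
    wet steps (parameter values are illustrative only)."""
    load, washed = 0.0, []
    for q in runoff:                 # q: runoff rate in the time step
        if q <= 0:                   # dry weather: asymptotic buildup
            load += k_b * (b_max - load) * dt
            washed.append(0.0)
        else:                        # wet weather: first-order washoff
            w = min(load, k_w * q ** n_w * load * dt)
            load -= w
            washed.append(w)
    return np.array(washed)

# 72 dry hours, a 6-hour storm, then another dry spell
runoff = np.r_[np.zeros(72), np.full(6, 4.0), np.zeros(90)]
event_load = buildup_washoff(runoff).sum()
```

    Run over a long rainfall record, the cumulative washoff series yields the probabilistic pollution-level features the abstract mentions.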

  14. Simulating land surface energy fluxes using a microscopic root water uptake approach in a northern temperate forest

    NASA Astrophysics Data System (ADS)

    He, L.; Ivanov, V. Y.; Schneider, C.

    2012-12-01

    The predictive accuracy of current land surface models has been limited by uncertainties in modeling transpiration and its sensitivity to the plant-available water in the root zone. Models usually distribute the vegetation transpiration demand as sink terms in a one-dimensional soil-water accounting model, according to the vertical root density profile. During water-limited situations, the sink terms are constrained using a heuristic "Feddes-type" water stress function. This approach significantly simplifies the actual three-dimensional physical process of root water uptake and may predict an early onset of water-limited transpiration. Recently, a microscopic root water uptake approach was proposed to simulate the three-dimensional radial moisture fluxes from the soil to roots, and the water flux transfer processes along the root systems. During dry conditions, this approach permits the compensation of decreased root water uptake in water-stressed regions by increasing uptake density in moister regions, an effect that cannot be captured by the Feddes heuristic function. This study "loosely" incorporates the microscopic root water uptake approach, based on the aRoot model, into the ecohydrological model tRIBS+VEGGIE. The ecohydrological model provides boundary conditions for the microscopic root water uptake model (e.g., potential transpiration, soil evaporation, and precipitation influx), and the latter computes the actual transpiration and the profiles of sink terms. Based on the departure of the actual latent heat flux from the potential value, the other energy budget components are adjusted. The study is conducted for a northern temperate mixed forest near the University of Michigan Biological Station. Observational evidence for this site suggests little-to-no control of transpiration by soil moisture, yet the commonly used Feddes-type approach implies severe water limitation on transpiration during dry episodes. The study addresses two species, oak and aspen, and explores the effects of differences in root architecture on actual transpiration. The energy components simulated with the microscopic modeling approach are tested against observational data. Through the improved spatiotemporal representation of the small-scale root water uptake process, the microscopic modeling framework leads to better agreement with the observational data than the Feddes-type approach. During dry periods, relatively high transpiration is sustained, as water uptake regions shift from densely to sparsely rooted layers, or from drier to moister soil areas. Implications and approaches for incorporating microscopic modeling methodologies within large-scale land-surface parameterizations are discussed.

  15. Operational validation of a multi-period and multi-criteria model conditioning approach for the prediction of rainfall-runoff processes in small forest catchments

    NASA Astrophysics Data System (ADS)

    Choi, H.; Kim, S.

    2012-12-01

    Most hydrologic models are used to describe and represent the spatio-temporal variability of hydrological processes at the watershed scale. Although hydrological responses are clearly time varying in nature, optimal values of model parameters are normally treated as time invariant. The recent paper of Choi and Beven (2007) presents a multi-period and multi-criteria model conditioning approach based on the equifinality thesis within the Generalised Likelihood Uncertainty Estimation (GLUE) framework. In their application, behavioural TOPMODEL parameter sets are determined by several performance measures for global (annual) and short (30-day) periods and clustered, using a Fuzzy C-means algorithm, into 15 types representing different hydrological conditions. Their study shows good performance in calibrating a rainfall-runoff model in a forest catchment. It also gives strong indications that it is uncommon to find model realizations that are behavioural over all periods and all performance measures, and that the multi-period model conditioning approach may become an effective new tool for predicting hydrological processes in ungauged catchments. This study follows up on Choi and Beven's (2007) model conditioning approach to test how effective the approach is for predicting rainfall-runoff responses in ungauged catchments. To this end, 6 small forest catchments were selected among the hydrological experimental catchments operated by the Korea Forest Research Institute. For each catchment, long-term hydrological time series spanning 10 to 30 years were available. The areas of the selected catchments range from 13.6 to 37.8 ha, all covered by coniferous or broad-leaved forests. The selected catchments extend from the southern coastal area to the northern part of South Korea, with bedrock of granite gneiss, granite or limestone. The study proceeds as follows. First, the hydrological time series of each catchment are sampled and clustered into multiple periods with distinctly different temporal characteristics. Second, behavioural parameter distributions are determined for each period based on the specification of multi-criteria model performance measures. Finally, the behavioural parameter sets from each period of a single catchment are applied to the corresponding periods of the other catchments, and cross-validations are conducted in this manner for all catchments. The multi-period model conditioning approach is clearly effective in reducing the width of the prediction limits, gives better model performance against the temporal variability of hydrological characteristics, and has the potential to be an effective prediction tool for ungauged catchments. However, more advanced and continued studies are needed to expand the application of this approach to the prediction of hydrological responses in ungauged catchments.
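
    As a rough illustration of the behavioural-selection step in GLUE (not the authors' TOPMODEL setup), the sketch below samples the single parameter of a hypothetical toy runoff model and retains the parameter sets whose Nash-Sutcliffe efficiency exceeds a threshold; the multi-period, multi-criteria variant applies several such measures over sub-periods and intersects the behavioural sets.

    import random

    random.seed(7)

    def toy_model(k, rain=(5.0, 0.0, 10.0, 3.0, 0.0, 8.0)):
        """Hypothetical one-parameter linear-reservoir runoff model, for illustration only."""
        storage, flow = 0.0, []
        for r in rain:
            storage += r
            q = k * storage
            storage -= q
            flow.append(q)
        return flow

    def nash_sutcliffe(sim, obs):
        mean_obs = sum(obs) / len(obs)
        err = sum((s - o) ** 2 for s, o in zip(sim, obs))
        return 1.0 - err / sum((o - mean_obs) ** 2 for o in obs)

    obs = toy_model(0.3)  # pretend these are the observations (true k = 0.3)

    # GLUE: Monte Carlo sampling; keep "behavioural" sets with NSE above the threshold.
    scored = [(nash_sutcliffe(toy_model(k), obs), k)
              for k in (random.uniform(0.05, 0.95) for _ in range(5000))]
    behavioural = [(nse, k) for nse, k in scored if nse >= 0.7]
    ks = sorted(k for _, k in behavioural)
    print(len(ks), round(ks[0], 3), round(ks[-1], 3))  # behavioural range brackets k = 0.3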

  16. Domino effects within a chemical cluster: a game-theoretical modeling approach by using Nash-equilibrium.

    PubMed

    Reniers, Genserik; Dullaert, Wout; Karel, Soudan

    2009-08-15

    Every company situated within a chemical cluster faces domino effect risks, whose magnitude depends on the company's own risk management strategies and on those of all others. Preventing domino effects is therefore very important to avoid catastrophes in the chemical process industry. Given that chemical companies are interlinked by domino effect accident links, there is some likelihood that even companies that fully invest in domino effects prevention measures will nonetheless experience an external domino effect caused by an accident in another chemical enterprise of the cluster. In this article, a game-theoretic approach is employed to interpret and model the behaviour of chemical plants within chemical clusters while negotiating and deciding on domino effects prevention investments.
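
    A minimal sketch of the kind of game described, with purely hypothetical payoffs: two plants each decide whether to invest in domino-prevention measures, and the pure-strategy Nash equilibria are found by enumeration.

    import itertools

    # payoff[(a, b)] = (payoff to plant A, payoff to plant B); 0 = not invest, 1 = invest.
    # Negative values are expected losses (investment cost plus residual domino risk).
    payoff = {
        (0, 0): (-10.0, -10.0),  # both fully exposed to domino risk
        (0, 1): (-6.0, -7.0),    # B pays the cost; A benefits partly from B's effort
        (1, 0): (-7.0, -6.0),
        (1, 1): (-4.0, -4.0),    # both invest: lowest expected losses
    }

    def pure_nash(payoff):
        """Return all pure-strategy Nash equilibria of a 2x2 game."""
        eqs = []
        for a, b in itertools.product((0, 1), repeat=2):
            ua, ub = payoff[(a, b)]
            if all(ua >= payoff[(a2, b)][0] for a2 in (0, 1)) and \
               all(ub >= payoff[(a, b2)][1] for b2 in (0, 1)):
                eqs.append((a, b))
        return eqs

    print(pure_nash(payoff))  # [(1, 1)] with these illustrative numbers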

  17. The Random-Effect DINA Model

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…

  18. Generation of net sediment transport by velocity skewness in oscillatory sheet flow

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin

    2018-01-01

    This study utilizes a qualitative approach and a two-phase numerical model to investigate net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived from a pseudo-laminar approximation of the boundary layer velocity and an exponential approximation of the concentration. The two-phase model reproduces well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. In particular, it can illustrate the difference between the positive and negative flow stages caused by velocity skewness, which is critical in determining the net boundary layer flow and the direction of sediment transport. The two-phase model also explains the effects of sediment diameter and phase-lag on sediment transport through comparison with instantaneous-type formulas, to better illustrate the velocity skewness effect. Previous studies of sheet flow transport in pure velocity-skewed flows attributed net sediment transport only to the phase-lag effect. In the present study, the qualitative approach and the two-phase model show that the phase-lag effect is important but not sufficient to explain the net sediment transport beneath pure velocity-skewed flow and current; the asymmetric wave boundary layer development between the positive and negative flow stages also contributes to the sediment transport.

  19. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    PubMed

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders affected by prior exposure, which may include past outcomes, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
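
    A minimal sketch of fitting an SCMM with generalized estimating equations in Python (statsmodels); the simulated data, column names, and coefficients are illustrative assumptions, not the authors' setup.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    rows = []
    for i in range(300):                     # 300 subjects, 5 visits each
        x_prev = y_prev = 0.0
        for t in range(5):
            l = 0.5 * y_prev + rng.normal()  # time-varying confounder affected by prior outcome
            x = 0.4 * l + rng.normal()       # exposure influenced by the confounder
            y = 0.3 * x + 0.5 * l + 0.4 * y_prev + rng.normal()
            rows.append(dict(id=i, x=x, l=l, y=y, x_lag=x_prev, y_lag=y_prev))
            x_prev, y_prev = x, y
    df = pd.DataFrame(rows)

    # SCMM: regress y_t on x_t while conditioning on prior exposure, prior outcome, and
    # the time-varying covariate; robust GEE standard errors handle clustering by subject.
    scmm = sm.GEE.from_formula(
        "y ~ x + x_lag + y_lag + l", groups="id", data=df,
        cov_struct=sm.cov_struct.Independence(), family=sm.families.Gaussian(),
    ).fit()
    print(scmm.params["x"])  # close to the simulated total effect of 0.3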

  20. Predictive Models for Semiconductor Device Design and Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1998-01-01

    The device feature size continues to be on a downward trend, with a simultaneous upward trend in wafer size to 300 mm. For this reason, predictive models are needed more than ever before. At NASA Ames, a Device and Process Modeling effort has recently been initiated to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry and materials science. This talk outlines these efforts and emphasizes the interaction among the various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts: one based on a quantum mechanics approach, and the second on a semiclassical hydrodynamics approach with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequilibrium Green's function (NEGF) approach, a Wigner function approach, and a density matrix approach. Results using the various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching, using first-principles models coupling reactor-level and wafer-level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.

  1. The Effect of Classroom Teachers' Attitudes toward Constructivist Approach on Their Level of Establishing a Constructivist Learning Environment: A Case of Mersin

    ERIC Educational Resources Information Center

    Uredi, Lutfi

    2013-01-01

    This study aims to determine the attitudes of classroom teachers towards the constructivist approach and to analyze the effect of those attitudes on their level of creating a constructivist learning environment. For that purpose, a relational screening model was used in the research. The research sample included 504…

  2. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high-fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  3. Towards sustainable e-health networks: does modeling support efficient management and operation?

    PubMed

    Staemmler, Martin

    2007-01-01

    e-Health networks require cost-effective approaches for routine operation to achieve long-lasting sustainability. By using a model to represent (i) the network's enterprise functions, (ii) the applications used and (iii) the physical implementations, the tasks of management, adaptation to changes and continued maintenance can be effectively supported. The paper discusses approaches for modeling, assesses their usefulness for the above tasks and selects the 3LGM meta model. Based on this concept, three ways of modeling the specific properties of an e-Health network are presented, leading to the decision to represent the hospitals involved in only one layer. The resulting model is presented, assessed and shown to support strategic management, day-to-day maintenance and documentation.

  4. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated with an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.

  5. An integrated approach to evaluating alternative risk prediction strategies: a case study comparing alternative approaches for preventing invasive fungal disease.

    PubMed

    Sadique, Z; Grieve, R; Harrison, D A; Jit, M; Allen, E; Rowan, K M

    2013-12-01

    This article proposes an integrated approach to the development, validation, and evaluation of new risk prediction models, illustrated with the Fungal Infection Risk Evaluation study, which developed risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive fungal disease (IFD). Our decision-analytical model compared alternative strategies for preventing IFD at up to three clinical decision time points (critical care admission, after 24 hours, and end of day 3), followed by antifungal prophylaxis for those judged "high" risk, versus "no formal risk assessment." We developed prognostic models to predict the risk of IFD before critical care unit discharge, with data from 35,455 admissions to 70 UK adult critical care units, and validated the models externally. The decision model was populated with positive predictive values and negative predictive values from the best-fitting risk models. We projected lifetime cost-effectiveness and expected value of partial perfect information for groups of parameters. The risk prediction models performed well in internal and external validation. Risk assessment and prophylaxis at the end of day 3 was the most cost-effective strategy at the 2% and 1% risk thresholds. Risk assessment at each time point was the most cost-effective strategy at a 0.5% risk threshold. Expected values of partial perfect information were high for positive predictive values or negative predictive values (£11 million-£13 million) and quality-adjusted life-years (£11 million). It is cost-effective to formally assess the risk of IFD for non-neutropenic, critically ill adult patients. This integrated approach to developing and evaluating risk models is useful for informing clinical practice and future research investment. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.

  6. Stochastic Technology Choice Model for Consequential Life Cycle Assessment.

    PubMed

    Kätelhön, Arne; Bardow, André; Suh, Sangwon

    2016-12-06

    Discussions on Consequential Life Cycle Assessment (CLCA) have relied largely on partial or general equilibrium models. Such models are useful for integrating market effects into CLCA, but also have well-recognized limitations, such as the poor granularity of the sectoral definition and the assumption of perfect foresight by all economic agents. Building on the Rectangular-Choice-of-Technology (RCOT) model, this study proposes a new modeling approach for CLCA, the Technology Choice Model (TCM). In this approach, the RCOT model is adapted for use in CLCA and extended to incorporate parameter uncertainties and suboptimal decisions due to market imperfections and information asymmetry in a stochastic setting. In a case study on rice production, we demonstrate that the proposed approach allows modeling of complex production technology mixes and their expected environmental outcomes under uncertainty, at a high level of detail. Incorporating the effects of production constraints, uncertainty, and suboptimal decisions by economic agents significantly affects technology mixes and the associated greenhouse gas (GHG) emissions of the system under study. The case study also shows the model's ability to determine both the average and marginal environmental impacts of a product in response to changes in the quantity of final demand.
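
    In the RCOT spirit, the deterministic core of such a technology choice can be sketched as a small linear program: activity levels of competing technologies are chosen to meet final demand at minimum cost subject to a factor constraint, and the emissions of the resulting mix are then evaluated. All numbers below are illustrative; TCM's stochastic extension would additionally sample the cost and emission parameters.

    from scipy.optimize import linprog

    cost = [1.0, 1.3]    # unit production cost, technology 1 vs 2 (hypothetical)
    ghg = [2.5, 1.1]     # unit GHG emissions
    land = [1.2, 0.8]    # land required per unit output
    demand, land_cap = 100.0, 105.0

    res = linprog(
        c=cost,
        A_ub=[land], b_ub=[land_cap],      # scarce-factor (land) constraint
        A_eq=[[1.0, 1.0]], b_eq=[demand],  # meet final demand exactly
        bounds=[(0, None), (0, None)],
    )
    mix = res.x
    print(mix, sum(g * x for g, x in zip(ghg, mix)))
    # The binding land constraint pushes part of production to the cleaner technology.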

  7. A Bayesian Model for the Estimation of Latent Interaction and Quadratic Effects When Latent Variables Are Non-Normally Distributed

    ERIC Educational Resources Information Center

    Kelava, Augustin; Nagengast, Benjamin

    2012-01-01

    Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent…

  8. Combination of a proteomics approach and reengineering of meso scale network models for prediction of mode-of-action for tyrosine kinase inhibitors.

    PubMed

    Balabanov, Stefan; Wilhelm, Thomas; Venz, Simone; Keller, Gunhild; Scharf, Christian; Pospisil, Heike; Braig, Melanie; Barett, Christine; Bokemeyer, Carsten; Walther, Reinhard; Brümmendorf, Tim H; Schuppert, Andreas

    2013-01-01

    In drug discovery, the characterisation of the precise modes of action (MoA) and of unwanted off-target effects of novel molecularly targeted compounds is of the highest relevance. Recent approaches for identification of MoA have employed various techniques for modeling well-defined signaling pathways, including structural information, changes in the phenotypic behavior of cells, and gene expression patterns after drug treatment. However, efficient approaches focusing on proteome-wide data for the identification of MoA, including interference with mutations, are underrepresented. As mutations are key drivers of drug resistance in molecularly targeted tumor therapies, efficient analysis and modeling of the downstream effects of mutations on drug MoA is key to the development of improved targeted anti-cancer drugs. Here we present a combination of global proteome analysis, reengineering of network models, and integration of apoptosis data, used to infer the mode of action of various tyrosine kinase inhibitors (TKIs) in chronic myeloid leukemia (CML) cell lines expressing wild-type as well as TKI-resistance-conferring mutants of BCR-ABL. The inferred network models provide a tool to predict the main MoA of drugs and to group drugs with known similar kinase inhibitory activity patterns in comparison to drugs with an additional MoA. We believe that our direct network reconstruction approach, demonstrated on proteomics data, can provide a complementary method to the established network reconstruction approaches for the preclinical modeling of the MoA of various types of targeted drugs in cancer treatment. Hence it may contribute to more precise prediction of clinically relevant on- and off-target effects of TKIs.

  9. Combination of a Proteomics Approach and Reengineering of Meso Scale Network Models for Prediction of Mode-of-Action for Tyrosine Kinase Inhibitors

    PubMed Central

    Balabanov, Stefan; Wilhelm, Thomas; Venz, Simone; Keller, Gunhild; Scharf, Christian; Pospisil, Heike; Braig, Melanie; Barett, Christine; Bokemeyer, Carsten; Walther, Reinhard

    2013-01-01

    In drug discovery, the characterisation of the precise modes of action (MoA) and of unwanted off-target effects of novel molecularly targeted compounds is of the highest relevance. Recent approaches for identification of MoA have employed various techniques for modeling well-defined signaling pathways, including structural information, changes in the phenotypic behavior of cells, and gene expression patterns after drug treatment. However, efficient approaches focusing on proteome-wide data for the identification of MoA, including interference with mutations, are underrepresented. As mutations are key drivers of drug resistance in molecularly targeted tumor therapies, efficient analysis and modeling of the downstream effects of mutations on drug MoA is key to the development of improved targeted anti-cancer drugs. Here we present a combination of global proteome analysis, reengineering of network models, and integration of apoptosis data, used to infer the mode of action of various tyrosine kinase inhibitors (TKIs) in chronic myeloid leukemia (CML) cell lines expressing wild-type as well as TKI-resistance-conferring mutants of BCR-ABL. The inferred network models provide a tool to predict the main MoA of drugs and to group drugs with known similar kinase inhibitory activity patterns in comparison to drugs with an additional MoA. We believe that our direct network reconstruction approach, demonstrated on proteomics data, can provide a complementary method to the established network reconstruction approaches for the preclinical modeling of the MoA of various types of targeted drugs in cancer treatment. Hence it may contribute to more precise prediction of clinically relevant on- and off-target effects of TKIs. PMID:23326482

  10. An overview of modelling approaches and potential solution towards an endgame of tobacco

    NASA Astrophysics Data System (ADS)

    Halim, Tisya Farida Abdul; Sapiri, Hasimah; Abidin, Norhaslinda Zainal

    2015-12-01

    Premature mortality due to tobacco use has increased worldwide. Despite the control policies implemented to reduce premature mortality, the rate of smoking prevalence is still high. Moreover, tobacco issues have become increasingly difficult, since many aspects need to be considered simultaneously. The purpose of this paper is therefore to present an overview of existing modelling studies of the tobacco control system. The background section describes tobacco issues and current trends. Existing models are categorised according to their modelling approaches, either individual or integrated. Next, a framework of modelling approaches based on the integration of multi-criteria decision making, system dynamics and nonlinear programming is proposed, which is expected to reduce smoking prevalence. This framework provides a guideline for modelling the interaction between smoking behaviour and its impacts, tobacco control policies and the effectiveness of each strategy in healthcare.

  11. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems in which control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.

  12. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, with the full expressive power of the tool. A set of modeling language extensions permits automatically synchronized communication between submodels; however, the automation requires that any such communication take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  13. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE PAGES

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

    2017-09-04

    Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced-order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified with an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  14. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

    Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced-order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified with an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  15. A Capability-Based Approach to Analyzing the Effectiveness and Robustness of an Offshore Patrol Vessel in the Search and Rescue Mission

    DTIC Science & Technology

    2012-12-01

    Only table-of-contents fragments of this report are available in the record: NPSS model development; comparison of the NPSS model with the Italian model; conclusions and recommendations (Italian model recommendations; NPSS model recommendations).

  16. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach is presented that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors, leading to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of OPAL, the Occam Plausibility Algorithm, which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  17. Economic impacts of climate change on agriculture: the AgMIP approach

    NASA Astrophysics Data System (ADS)

    Delincé, Jacques; Ciaian, Pavel; Witzke, Heinz-Peter

    2015-01-01

    The current paper investigates the long-term global impacts of climate change on crop productivity under different climate scenarios using the AgMIP approach (Agricultural Model Intercomparison and Improvement Project). The paper provides a horizontal model intercomparison of 11 economic models as well as a more detailed analysis of the simulated effects from the Common Agricultural Policy Regionalized Impact (CAPRI) model, to systematically compare its performance with other AgMIP models and specifically for Chinese agriculture. CAPRI is a comparative static partial equilibrium model extensively used for medium- and long-term economic and environmental policy impact applications. The results indicate that, at the global level, climate change will cause an agricultural productivity decrease (between -2% and -15%), a food price increase (between 1.3% and 56%) and an expansion of cultivated area (between 1% and 4%) by 2050. The results for China indicate that climate change effects tend to be smaller than the global impacts. The CAPRI-simulated effects are, in general, close to the median across all AgMIP models. Model intercomparison analyses reveal consistency in the direction of change under climate change but relatively strong heterogeneity in the magnitude of the effects across models.

  18. Stochastic Modeling of the Environmental Impacts of the Mingtang Tunneling Project

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Chen, Ziyang; Tan, Benjamin Zhi Wen; Sege, Jon; Wang, Changhong; Rubin, Yoram

    2017-04-01

    This paper investigates the environmental impacts of a major tunneling project in China. Of particular interest is the drawdown of the water table, due to its potential impacts on ecosystem health and on agricultural activity. Due to scarcity of data, the study pursues a Bayesian stochastic approach built around a numerical model, with the goal of deriving the posterior distributions of the dependent variables conditional on local data. The choice of the Bayesian approach for this study is somewhat non-trivial because of the scarcity of in-situ measurements. The thought guiding this selection is that prior distributions for the model input variables are valuable tools even when few local measurements are available, and that the Bayesian approach provides a good starting point for further updates as and if additional data become available. To construct effective priors, a systematic approach was developed and implemented for building informative priors from other, well-documented sites that bear geological and hydrological similarity to the target site (the Mingtang tunneling project). The approach is built around two classes of similarity criteria: a physically-based set of criteria and an additional set covering epistemic criteria. The prior construction strategy was implemented for the hydraulic conductivity of the various rock types at the site (granite and gneiss) and for modeling the geometry and conductivity of the fault zones. Additional elements of our strategy include (1) modeling the water table through bounding surfaces representing upper and lower limits, and (2) modeling the effective conductivity as a random variable (varying between realizations, not in space). The approach was tested successfully against its ability to predict the tunnel infiltration fluxes and against observations of drying soils.

  19. Dynamic simulation of storm-driven barrier island morphology under future sea level rise

    NASA Astrophysics Data System (ADS)

    Passeri, D. L.; Long, J.; Plant, N. G.; Bilskie, M. V.; Hagen, S. C.

    2016-12-01

    The impacts of short-term processes such as tropical and extratropical storms have the potential to alter barrier island morphology. On the event scale, the effects of storm-driven morphology may result in damage to, or loss of, property, infrastructure and habitat. On the decadal scale, the combination of storms and sea level rise (SLR) will drive the evolution of barrier islands. The effects of SLR on hydrodynamics and coastal morphology are dynamic and inter-related; nonlinearities in SLR can cause larger peak surges, lengthier inundation times and additional inundated land, which may result in increased erosion, overwash or breaching along barrier islands. This study uses a two-dimensional morphodynamic model (XBeach) to examine the response of Dauphin Island, AL to storm surge under future SLR. The model is forced with water levels and waves provided by a large-domain hydrodynamic model. A historic validation of hurricanes Ivan and Katrina indicates the model is capable of predicting morphologic response with high skill (0.5). The validated model is used to simulate storm surge driven by Ivan and Katrina under four future SLR scenarios, ranging from 20 cm to 2 m. Each SLR scenario is implemented using a static or "bathtub" approach (in which water levels are increased linearly by the amount of SLR) versus a dynamic approach (in which SLR is applied at the open-ocean boundary of the hydrodynamic model and allowed to propagate through the domain as guided by the governing equations). Results illustrate that higher amounts of SLR result in additional shoreline change, dune erosion, overwash and breaching. Compared to the dynamic approach, the static approach over-predicts inundation, dune erosion, overwash and breaching of the island. Overall, the results provide a better understanding of the effects of SLR on storm-driven barrier island morphology and support a paradigm shift away from the "bathtub" approach, towards considering the integrated, dynamic effects of SLR.

  20. Moving from Introverted to Extraverted Embedded Librarian Services: An Example of a Proactive Model

    ERIC Educational Resources Information Center

    Knight, Valerie R.; Loftis, Charissa

    2012-01-01

    Librarians at Wayne State College have developed an extraverted online embedded librarian model whereby librarians proactively push out content to students at time-appropriate moments. This article outlines why extraverted approaches are more effective than introverted approaches. It also details how to develop an extraverted program. First,…

  1. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-01-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon the joint distribution of multiple fit factor measurements over time, obtained from linear mixed effects models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test, to provide reasonable likelihood that a worker will be adequately protected in the future, and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
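
    A minimal sketch of this prediction logic under a simplified random-intercept model on log10 fit factors (the paper's model additionally separates within-day and between-day variability); all numbers are hypothetical.

    import math

    def predict_future(y_past, mu, tau2, sigma2):
        """Conditional mean and variance of a future log10 fit factor, given a
        worker's past values, under y_ij = mu + b_i + e_ij, b_i ~ N(0, tau2),
        e_ij ~ N(0, sigma2)."""
        n = len(y_past)
        ybar = sum(y_past) / n
        shrink = n * tau2 / (n * tau2 + sigma2)             # weight on the worker's own data
        mean = mu + shrink * (ybar - mu)
        var = tau2 * sigma2 / (n * tau2 + sigma2) + sigma2  # posterior b_i variance + new error
        return mean, var

    # Probability the future fit factor exceeds a pass criterion of 100:
    mean, var = predict_future([2.3, 2.1], mu=2.2, tau2=0.04, sigma2=0.02)
    z = (math.log10(100) - mean) / math.sqrt(var)
    print(round(0.5 * math.erfc(z / math.sqrt(2)), 3))  # P(future log10 fit factor > 2)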

  2. Coarse-grained molecular dynamics simulations of depletion-induced interactions for soft matter systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shendruk, Tyler N., E-mail: tyler.shendruk@physics.ox.ac.uk; Bertrand, Martin; Harden, James L.

    2014-12-28

    Given the ubiquity of depletion effects in biological and other soft matter systems, it is desirable to have coarse-grained Molecular Dynamics (MD) simulation approaches appropriate for the study of complex systems. This paper examines the use of two common truncated Lennard-Jones (Weeks-Chandler-Andersen (WCA)) potentials to describe a pair of colloidal particles in a thermal bath of depletants. The shifted-WCA model is the steeper of the two repulsive potentials considered, while the combinatorial-WCA model is the softer. It is found that the depletion-induced well depth for the combinatorial-WCA model is significantly deeper than for the shifted-WCA model, because the resulting overlap of the colloids yields extra accessible volume for depletants. For both shifted- and combinatorial-WCA simulations, the second virial coefficients and pair potentials between colloids are demonstrated to be well approximated by the Morphometric Thermodynamics (MT) model. This agreement suggests that the presence of depletants can be accurately modelled in MD simulations by including them implicitly through simple, analytical MT forms for depletion-induced interactions. Although both WCA potentials are found to be effective generic coarse-grained simulation approaches for studying depletion effects in complicated soft matter systems, combinatorial-WCA is the more efficient approach, as depletion effects are enhanced at lower depletant densities. The findings indicate that for soft matter systems that are better modelled by potentials with some compressibility, predictions from hard-sphere systems could greatly underestimate the magnitude of depletion effects at a given depletant density.
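
    For reference, the shifted-WCA form and the second virial coefficient integral can be sketched as below (reduced units; the combinatorial-WCA variant and the depletant bath are omitted, and the integration grid is arbitrary).

    import math

    def shifted_wca(r, eps=1.0, sigma=1.0):
        """Purely repulsive LJ, cut and shifted at its minimum r_c = 2^(1/6) * sigma."""
        rc = 2.0 ** (1.0 / 6.0) * sigma
        if r >= rc:
            return 0.0
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6) + eps  # the +eps shift makes U(r_c) = 0

    def second_virial(u, beta=1.0, rmax=5.0, n=20000):
        """B2 = -2*pi * integral of (exp(-beta*U(r)) - 1) * r^2 dr (simple Riemann sum)."""
        dr = rmax / n
        return -2.0 * math.pi * sum(
            (math.exp(-beta * u(i * dr)) - 1.0) * (i * dr) ** 2 * dr for i in range(1, n + 1)
        )

    print(second_virial(shifted_wca))  # slightly above the hard-sphere value 2*pi/3 ~ 2.09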

  3. Microwave landing system modeling with application to air traffic control

    NASA Technical Reports Server (NTRS)

    Poulose, M. M.

    1991-01-01

    Compared to the current instrument landing system, the microwave landing system (MLS), which is in the advanced stage of implementation, can potentially provide significant fuel and time savings as well as more flexibility in approach and landing functions. However, the expanded coverage and increased accuracy requirements of the MLS make it more susceptible to the features of the site in which it is located. An analytical approach is presented for evaluating the multipath effects of scatterers that are commonly found in airport environments. The approach combines a multiplane model with a ray-tracing technique and a formulation for estimating the electromagnetic fields caused by the antenna array in the presence of scatterers. The model is applied to several airport scenarios. The reduced computational burden enables the scattering effects on MLS position information to be evaluated in near real time. Evaluation in near real time would permit the incorporation of the modeling scheme into air traffic control automation; it would adaptively delineate zones of reduced accuracy within the MLS coverage volume, and help establish safe approach and takeoff trajectories in the presence of uneven terrain and other scatterers.

  4. Approach to Assessing the Effects of Aerial Deposition on Water Quality in the Alberta Oil Sands Region.

    PubMed

    Dayyani, Shadi; Daly, Gillian; Vandenberg, Jerry

    2016-02-01

    Snow cover forms a porous medium that acts as a receptor for aerially deposited polycyclic aromatic hydrocarbons (PAHs) and metals. The snowpack, acting as a temporary storage reservoir, releases contaminants accumulated over the winter during a relatively short melt period. This process can result in elevated concentrations of contaminants in melt water. Recent studies in the Alberta oil sands region have documented increased contaminant concentrations in snowpack and lake sediments; however, no studies have addressed the fate and transport of contaminants during the snowmelt period. This study describes modelling approaches developed to assess the potential effects of aerially deposited PAHs and metals on snowpack and snowmelt water concentrations. The contribution of snowmelt to freshwater PAH concentrations is assessed using a dynamic, multi-compartmental fate model, and the contribution to metal concentrations is estimated using a mass-balance approach. The modelling approaches described herein were applied to two watersheds in the Alberta oil sands region for two planned oil sands developments. Accumulation of PAHs in a lake within the deposition zone was also modelled for comparison with observed concentrations.

  5. Accounting for context in studies of health inequalities: a review and comparison of analytic approaches.

    PubMed

    Schempf, Ashley H; Kaufman, Jay S

    2012-10-01

    A common epidemiologic objective is to evaluate the contribution of residential context to individual-level disparities by race or socioeconomic position. We reviewed analytic strategies to account for the total (observed and unobserved factors) contribution of environmental context to health inequalities, including conventional fixed effects (FE) and hybrid FE implemented within a random effects (RE) or a marginal model. To illustrate the results and limitations of the various analytic approaches to accounting for the total contextual component of health disparities, we used data on births nested within neighborhoods as an applied example of evaluating neighborhood confounding of racial disparities in gestational age at birth, including both a continuous and a binary outcome. Ordinary and RE models provided disparity estimates that can be substantially biased in the presence of neighborhood confounding. Both FE and hybrid FE models can account for cluster-level confounding and provide disparity estimates unconfounded by neighborhood, with the latter having greater flexibility in allowing estimation of neighborhood-level effects and intercept/slope variability when implemented in a RE specification. Given the range of models that can be implemented in a hybrid approach and the frequent goal of accounting for contextual confounding, this approach should be used more often. Published by Elsevier Inc.

  6. Colorectal Cancer Deaths Attributable to Nonuse of Screening in the United States

    PubMed Central

    Meester, Reinier G.S.; Doubeni, Chyke A.; Lansdorp-Vogelaar, Iris; Goede, S.L.; Levin, Theodore R.; Quinn, Virginia P.; van Ballegooijen, Marjolein; Corley, Douglas A.; Zauber, Ann G.

    2015-01-01

    Purpose Screening is a major contributor to colorectal cancer (CRC) mortality reductions in the U.S., but is underutilized. We estimated the fraction of CRC deaths attributable to nonuse of screening to demonstrate the potential benefits from targeted interventions. Methods The established MISCAN-colon microsimulation model was used to estimate the population attributable fraction (PAF) in people aged ≥50 years. The model incorporates long-term patterns and effects of screening by age and type of screening test. PAF for 2010 was estimated using currently available data on screening uptake; PAF was also projected assuming constant future screening rates to incorporate lagged effects from past increases in screening uptake. We also computed PAF using Levin's formula to gauge how this simpler approach differs from the model-based approach. Results There were an estimated 51,500 CRC deaths in 2010, about 63% (N∼32,200) of which were attributable to non-screening. The PAF decreases slightly to 58% in 2020. Levin's approach yielded a considerably more conservative PAF of 46% (N∼23,600) for 2010. Conclusions The majority of current U.S. CRC deaths are attributable to non-screening. This underscores the potential benefits of increasing screening uptake in the population. Traditional methods of estimating PAF underestimated screening effects compared with model-based approaches. PMID:25721748
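
    Levin's formula, used above as the simpler comparator, is a one-liner; the inputs below are illustrative, not the study's estimates.

    def levin_paf(p, rr):
        """PAF = p * (RR - 1) / (p * (RR - 1) + 1), for exposure prevalence p and
        relative risk RR of the outcome among the exposed (here, the non-screened)."""
        return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

    # E.g., 40% never adequately screened and RR = 3 for CRC death gives PAF ~ 0.44.
    print(round(levin_paf(0.40, 3.0), 2))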

  7. Integrative rodent models for assessing male reproductive toxicity of environmental endocrine active substances

    PubMed Central

    Auger, Jacques; Eustache, Florence; Rouiller-Fabre, Virginie; Canivenc-Lavier, Marie Chantal; Livera, Gabriel

    2014-01-01

    In the present review, we first summarize the main benefits, limitations and pitfalls of conventional in vivo approaches to assessing male reproductive structures and functions in rodents in cases of endocrine active substance (EAS) exposure from the postulate that they may provide data that can be extrapolated to humans. Then, we briefly present some integrated approaches in rodents we have recently developed at the organism level. We particularly focus on the possible effects and modes of action (MOA) of these substances at low doses and in mixtures, real-life conditions and at the organ level, deciphering the precise effects and MOA on the fetal testis. It can be considered that the in vivo experimental EAS exposure of rodents remains the first choice for studies and is a necessary tool (together with the epidemiological approach) for understanding the reproductive effects and MOA of EASs, provided the pitfalls and limitations of the rodent models are known and considered. We also provide some evidence that classical rodent models may be refined for studying the multiple consequences of EAS exposure, not only on the reproductive axis but also on various hormonally regulated organs and tissues, among which several are implicated in the complex process of mammalian reproduction. Such models constitute an interesting way of approaching human exposure conditions. Finally, we show that organotypic culture models are powerful complementary tools, especially when focusing on the MOA. All these approaches have contributed in a combinatorial manner to a better understanding of the impact of EAS exposure on human reproduction. PMID:24369134

  8. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The problem of updating deterministic bridge deterioration models is well established in bridge management, but traditional methods and approaches require manual intervention. This paper presents an artificial-intelligence-based approach to self-update the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model with data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results showed it to be an accurate, effective, and satisfactory approach to the parameter-updating problem without manual intervention.
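
    A minimal sketch of the Bayesian update step described, assuming (hypothetically) a normal prior on a single deterioration-rate parameter and normally distributed inspection data with known variance, so the update stays in closed form.

    def normal_update(prior_mean, prior_var, data, data_var):
        """Posterior (mean, var) for a normal mean with known observation variance."""
        n = len(data)
        xbar = sum(data) / n
        post_var = 1.0 / (1.0 / prior_var + n / data_var)
        post_mean = post_var * (prior_mean / prior_var + n * xbar / data_var)
        return post_mean, post_var

    # Historical information forms the prior; newly collected condition data update it.
    mean, var = normal_update(prior_mean=1.2, prior_var=0.25,
                              data=[1.6, 1.4, 1.5], data_var=0.09)
    print(round(mean, 3), round(var, 4))  # posterior pulled toward the new observations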

  9. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The problem of updating deterministic bridge deterioration models is well established in bridge management, but traditional methods and approaches require manual intervention. This paper presents an artificial-intelligence-based approach to self-update the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model with data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results showed it to be an accurate, effective, and satisfactory approach to the parameter-updating problem without manual intervention. PMID:26601121

  10. Final Report: The Influence of Novel Behavioral Strategies in Promoting the Diffusion of Solar Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillingham, Kenneth; Bollinger, Bryan

    This is the final report for a systematic, evidence-based project using an unprecedented series of large-scale field experiments to examine the effectiveness and cost-effectiveness of novel approaches to reducing the soft costs of residential solar photovoltaics. The approaches were based around grassroots marketing campaigns, called ‘Solarize’ campaigns, designed to lower costs and increase adoption of solar technology. This study quantified the effectiveness and cost-effectiveness of the Solarize programs and tested new approaches to further improve the model.

  11. A multilevel approach to modeling of porous bioceramics

    NASA Astrophysics Data System (ADS)

    Mikushina, Valentina A.; Sidorenko, Yury N.

    2015-10-01

    The paper is devoted to a discussion of multiscale models of heterogeneous materials. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the reinforcement structure of the material into account. Within such a model, different physical processes that influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the value of the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite, a porous ceramic matrix filled with cortical bone tissue, is discussed.

  12. Improving Control of Tuberculosis in Low-Burden Countries: Insights from Mathematical Modeling

    PubMed Central

    White, Peter J.; Abubakar, Ibrahim

    2016-01-01

    Tuberculosis control and elimination remains a challenge for public health even in low-burden countries. New technology and novel approaches to case-finding, diagnosis, and treatment are causes for optimism but they need to be used cost-effectively. This in turn requires improved understanding of the epidemiology of TB and analysis of the effectiveness and cost-effectiveness of different interventions. We describe the contribution that mathematical modeling can make to understanding epidemiology and control of TB in different groups, guiding improved approaches to public health interventions. We emphasize that modeling is not a substitute for collecting data but rather is complementary to empirical research, helping determine what are the key questions to address to maximize the public-health impact of research, helping to plan studies, and making maximal use of available data, particularly from surveillance, and observational studies. We provide examples of how modeling and related empirical research inform policy and discuss how a combination of these approaches can be used to address current questions of key importance, including use of whole-genome sequencing, screening and treatment for latent infection, and combating drug resistance. PMID:27199896

  13. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  14. Partially composite particle physics with and without supersymmetry

    NASA Astrophysics Data System (ADS)

    Kramer, Thomas A.

    Theories in which the Standard Model fields are partially composite provide elegant and phenomenologically viable solutions to the Hierarchy Problem. In this thesis we study this class of models from two different perspectives. We first derive an effective field theory describing the interactions of the Standard Model fields with their lightest composite partners, based on two weakly coupled sectors. Technically, via the AdS/CFT correspondence, our model is dual to a highly deconstructed theory with a single warped extra dimension. This two-sector theory provides a simplified approach to the phenomenology of this important class of theories. We then use this effective field theoretic approach to study models with weak-scale accidental supersymmetry. In particular, we investigate the possibility that the Standard Model Higgs field is a member of a composite supersymmetric sector interacting weakly with the known Standard Model fields.

  15. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy for analyzing longitudinal latent variables, which can be based on either CTT or IRT models, remains to be identified. This strategy must take into account the latent character of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyzing longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, and the Rasch and Mixed models (RM), Plausible Values (PV), and Longitudinal Rasch model (LRM) methods, all based on the Rasch model. All methods showed comparable results in terms of type I error, all close to 5%. The LRM and SM methods presented comparable power and unbiased time effect estimations, whereas the RM and PV methods showed low power and biased time effect estimations. This suggests that the RM and PV methods should be avoided when analyzing longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
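
    For context, the Rasch model underlying the RM, PV and LRM methods gives the probability of endorsing an item as a logistic function of person ability minus item difficulty, and a longitudinal variant adds a time effect to the ability. A minimal sketch with illustrative numbers:

    import math

    def rasch_prob(theta, b):
        """P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # A positive time effect delta shifts ability at follow-up, raising endorsement
    # probabilities on the same item (theta, b, delta are illustrative).
    theta, b, delta = 0.5, 0.0, 0.3
    print(rasch_prob(theta, b), rasch_prob(theta + delta, b))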

  16. Computational studies of horizontal axis wind turbines in high wind speed condition using advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Benjanirat, Sarun

    Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory, and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is being enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model, k-epsilon, k-omega and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds, for a configuration termed the National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras-DES model.

  17. Agent Based Modeling Applications for Geosciences

    NASA Astrophysics Data System (ADS)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. The challenges associated with this modeling method include the significant computational requirements of tracking thousands to millions of agents, the lack of established methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in a thermodynamic framework as a set of reactions that roll up the integrated effect that diverse biological communities exert on a geological system. This approach may work well to predict the effect of certain biological communities in specific environments for which experimental data are available. However, it does not further our knowledge of how the geobiological system actually functions on a micro scale. Agent-based techniques may provide a framework to explore the fundamental interactions required to explain the system-wide behavior. This presentation surveys several promising applications of agent-based modeling approaches to problems in the geosciences and describes specific contributions to some of the inherent challenges facing this approach.

  18. Modeling changes in biomass composition during microwave-based alkali pretreatment of switchgrass.

    PubMed

    Keshwani, Deepak R; Cheng, Jay J

    2010-01-01

    This study used two different approaches to model changes in biomass composition during microwave-based pretreatment of switchgrass: kinetic modeling using a time-dependent rate coefficient, and a Mamdani-type fuzzy inference system. In both modeling approaches, the dielectric loss tangent of the alkali reagent and pretreatment time were used as predictors for changes in amounts of lignin, cellulose, and xylan during the pretreatment. Training and testing data sets for development and validation of the models were obtained from pretreatment experiments conducted using 1-3% w/v NaOH (sodium hydroxide) and pretreatment times ranging from 5 to 20 min. The kinetic modeling approach for lignin and xylan gave comparable results for training and testing data sets, and the differences between the predictions and experimental values were within 2%. The kinetic modeling approach for cellulose was not as effective, and the differences were within 5-7%. The time-dependent rate coefficients of the kinetic models estimated from experimental data were consistent with the heterogeneity of individual biomass components. The Mamdani-type fuzzy inference was shown to be an effective approach to model the pretreatment process and yielded predictions with less than 2% deviation from the experimental values for lignin and with less than 3% deviation from the experimental values for cellulose and xylan. The entropies of the fuzzy outputs from the Mamdani-type fuzzy inference system were calculated to quantify the uncertainty associated with the predictions. Results indicate that there is no significant difference between the entropies associated with the predictions for lignin, cellulose, and xylan. It is anticipated that these models could be used in process simulations of bioethanol production from lignocellulosic materials.
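
    As a sketch of the kinetic approach described above, the snippet below fits a first-order removal model with a time-dependent rate coefficient. The power-law form k(t) = k0*t**(m-1) and the data points are illustrative assumptions, not the fitted coefficients or measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def remaining_fraction(t, k0, m):
    """First-order decay with a time-dependent rate coefficient
    k(t) = k0 * t**(m-1), an assumed power-law form often used for
    heterogeneous substrates.  Integrating dC/dt = -k(t)*C gives
    C(t)/C0 = exp(-(k0/m) * t**m)."""
    return np.exp(-(k0 / m) * t**m)

# Hypothetical lignin-remaining fractions over 5-20 min of pretreatment.
t = np.array([5.0, 10.0, 15.0, 20.0])
frac = np.array([0.85, 0.76, 0.71, 0.68])

(k0, m), _ = curve_fit(remaining_fraction, t, frac, p0=(0.1, 0.5))
print(f"k0 = {k0:.3f} min^-m, m = {m:.3f}")
print(f"predicted fraction at 12 min: {remaining_fraction(12.0, k0, m):.3f}")
```

    A fitted m below 1 gives a rate coefficient that decays with time, consistent with the heterogeneity noted above: the easily removed fraction of each component reacts first, so the apparent rate falls as pretreatment proceeds.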

  19. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered a state variable are pointed out.

  20. Application of a New Hybrid RANS/LES Modeling Paradigm to Compressible Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Pederson, Clark; Haering, Sigfried; Moser, Robert

    2017-11-01

    It is well known that traditional hybrid RANS/LES modeling approaches suffer from a number of deficiencies. These deficiencies often stem from overly simplistic blending strategies based on scalar measures of turbulence length scale and grid resolution, and from the use of isotropic subgrid models in LES regions. A recently developed hybrid modeling approach has shown promise in overcoming these deficiencies in incompressible flows [Haering, 2015]. In the approach, RANS/LES blending is accomplished using a hybridization parameter that is governed by an additional model transport equation and is driven to achieve equilibrium between the resolved and unresolved turbulence for the given grid. Further, the model uses a tensor eddy viscosity that is formulated to represent the effects of anisotropic grid resolution on subgrid quantities. In this work, this modeling approach is extended to compressible flows and implemented in the compressible flow solver SU2 (http://su2.stanford.edu/). We discuss both modeling and implementation challenges and show preliminary results for compressible flow test cases with smooth wall separation.

  1. Analyzing degradation data with a random effects spline regression model

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-03-17

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is straightforward.
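
    A simulation sketch of the model structure may help: each item's degradation curve is a B-spline whose coefficients are drawn around a population mean (the random effect), with residual noise on top. This shows the data-generating model only, not the authors' Bayesian fitting, and all values are invented.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(7)

# Cubic B-spline basis on [0, 1] with one interior knot -> 5 basis functions.
degree = 3
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
n_coef = len(knots) - degree - 1
basis = BSpline(knots, np.eye(n_coef), degree)  # identity coefficients give the basis

beta_pop = np.array([0.0, 0.5, 1.2, 1.8, 2.0])  # population-mean coefficients
tau, sigma = 0.15, 0.05                         # item-to-item and residual SDs

t = np.linspace(0, 1, 20)                       # common inspection times
X = basis(t)                                    # design matrix, shape (20, 5)

# Each of 10 items gets its own coefficients drawn around the population
# mean (the random effect), then noisy measurements along its true curve.
paths = [X @ (beta_pop + tau * rng.standard_normal(n_coef))
         + sigma * rng.standard_normal(len(t)) for _ in range(10)]
print("degradation at final inspection time, by item:",
      np.round([p[-1] for p in paths], 2))
```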

  2. Impact resistance of fiber composites - Energy-absorbing mechanisms and environmental effects

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1985-01-01

    Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include mechanics models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.

  3. Impact resistance of fiber composites: Energy absorbing mechanisms and environmental effects

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1983-01-01

    Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include mechanics models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.

  4. Analyzing degradation data with a random effects spline regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is straightforward.

  5. The Effect of the Psychiatric Nursing Approach Based on the Tidal Model on Coping and Self-esteem in People with Alcohol Dependency: A Randomized Trial.

    PubMed

    Savaşan, Ayşegül; Çam, Olcay

    2017-06-01

    People with alcohol dependency have lower self-esteem than controls, and when their alcohol use increases, their self-esteem decreases. Coping skills in alcohol-related issues are predicted to reduce vulnerability to relapse. It is important to adapt care to individual needs so as to prevent a return to the cycle of alcohol use. The Tidal Model focuses on providing support and services to people who need to live a constructive life. The aim of this randomized study was to determine the effect of the psychiatric nursing approach based on the Tidal Model on coping and self-esteem in people with alcohol dependency. The study had a quasi-experimental design with a control group, and was conducted on 36 individuals (18 experimental, 18 control). An experimental and a control group were formed by assigning persons to each group using the stratified randomization technique in the order in which they were admitted to hospital. The Coping Inventory (COPE) and the Coopersmith Self-Esteem Inventory (CSEI) were used as measurement instruments and were administered before the intervention and three months after it. In addition to routine treatment and follow-up, the psychiatric nursing approach based on the Tidal Model was applied to the experimental group in the One-to-One Sessions. The psychiatric nursing approach based on the Tidal Model was effective in increasing the scores of people with alcohol dependency in positive reinterpretation and growth, active coping, restraint, emotional social support and planning, and in reducing their scores in behavioral disengagement. Self-esteem rose, but the difference from the control group did not reach significance. The psychiatric nursing approach based on the Tidal Model has an effect on people with alcohol dependency in maintaining their abstinence. The results of the study may inform theoretically grounded practices for improving coping behaviors and self-esteem and for facilitating the recovery process of alcohol-dependent individuals, with implications for mental health nursing. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
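
    The multiplier device can be sketched compactly: distributed recharge and pumping fields are scaled by a few uncertain multipliers that are sampled jointly with the model parameters. In the sketch below, a toy function stands in for MODFLOW and a plain random-walk Metropolis sampler stands in for DREAM; the priors, noise level, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(recharge_mult, pump_mult, K):
    """Stand-in for a distributed groundwater model run: simulated heads
    at 5 observation wells given input multipliers and conductivity K."""
    base_recharge = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
    base_pumping = np.array([0.4, 0.5, 0.3, 0.6, 0.4])
    return (recharge_mult * base_recharge - pump_mult * base_pumping) / K

obs = toy_model(1.1, 0.9, 2.0) + 0.02 * rng.standard_normal(5)

def log_post(theta):
    rm, pm, K = theta
    if not (0.5 < rm < 2.0 and 0.5 < pm < 2.0 and 0.1 < K < 10.0):
        return -np.inf                         # flat priors on plausible ranges
    resid = obs - toy_model(rm, pm, K)
    return -0.5 * np.sum((resid / 0.02) ** 2)  # Gaussian likelihood

# Random-walk Metropolis; DREAM instead adapts proposals across chains.
theta = np.array([1.0, 1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])
print("posterior means (recharge mult, pump mult, K):",
      np.round(post.mean(axis=0), 2))
```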

  7. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    PubMed

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
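
    The AFT side of this comparison can be sketched by simulating Weibull survival times in which treatment affects the outcome both directly and through a mediator, then fitting a parametric Weibull AFT model. The sketch assumes the Python lifelines package as a stand-in for SAS LIFEREG, and all coefficients are invented.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(42)
n = 500
treat = rng.integers(0, 2, n)
mediator = 0.5 * treat + rng.standard_normal(n)   # a-path: treat -> mediator

# Weibull AFT data generation: covariates act multiplicatively on time.
shape = 1.5
T = np.exp(1.0 + 0.3 * treat + 0.4 * mediator) * rng.weibull(shape, n)
C = rng.exponential(15.0, n)                      # independent censoring
df = pd.DataFrame({"time": np.minimum(T, C),
                   "event": (T <= C).astype(int),
                   "treat": treat,
                   "mediator": mediator})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()
# The indirect effect on log survival time is the product of the
# treat -> mediator path (0.5 here) and the mediator coefficient.
```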

  9. Practical Applications of Response-to-Intervention Research

    ERIC Educational Resources Information Center

    Griffiths, Amy-Jane; VanDerHeyden, Amanda M.; Parson, Lorien B.; Burns, Matthew K.

    2006-01-01

    Several approaches to response to intervention (RTI) described in the literature could be blended into an RTI model that would be effective in the schools. An effective RTI model should employ three fundamental variables: (a) systematic data collection to identify students in need, (b) effective implementation of interventions for adequate…

  10. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Falk, Carl F.; Savalei, Victoria

    2010-01-01

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years…

  11. Integrating fisheries approaches and household utility models for improved resource management.

    PubMed

    Milner-Gulland, E J

    2011-01-25

    Natural resource management is littered with cases of overexploitation and ineffectual management, leading to loss of both biodiversity and human welfare. Disciplinary boundaries stifle the search for solutions to these issues. Here, I combine the approach of management strategy evaluation, widely applied in fisheries, with household utility models from the conservation and development literature, to produce an integrated framework for evaluating the effectiveness of competing management strategies for harvested resources against a range of performance metrics. I demonstrate the strengths of this approach with a simple model, and use it to examine the effect of manager ignorance of household decisions on resource management effectiveness, and an allocation tradeoff between monitoring resource stocks to reduce observation uncertainty and monitoring users to improve compliance. I show that this integrated framework enables management assessments to consider household utility as a direct metric for system performance, and that although utility and resource stock conservation metrics are well aligned, harvest yield is a poor proxy for both, because it is a product of household allocation decisions between alternate livelihood options, rather than an end in itself. This approach has potential far beyond single-species harvesting in situations where managers are in full control; I show that the integrated approach enables a range of management intervention options to be evaluated within the same framework.

  12. Gender Role Conflict, Emotional Approach Coping, Self-Compassion and Distress in Prostate Cancer Patients: A Model of Direct and Moderating Effects.

    PubMed

    Lennon, Jennifer; Hevey, David; Kinsella, Louise

    2018-05-14

    Gender role conflict, or the negative consequences of male socialization, may compromise men's adjustment to prostate cancer by shaping how patients perceive and cope with their illness. Given mixed findings regarding how gender role conflict interacts with emotional approach coping to regulate distress in prostate cancer patients, the present study examined the effects of emotional approach coping when considered alongside self-compassion, the ability to be kind and understanding toward oneself. A total of 92 prostate cancer patients completed questionnaires measuring gender role conflict, emotional approach coping, self-compassion and distress. A moderated mediation model was tested, in which emotional approach coping mediated the path between gender role conflict and distress, and self-compassion moderated the paths between (a) gender role conflict and emotional approach coping, and (b) gender role conflict and distress. Results partially supported this model, with all study variables predicting distress in the expected directions. Emotional approach coping did not mediate associations between gender role conflict and distress; however, self-compassion did moderate the pathway between these variables. Results indicated that higher levels of self-compassion might protect men from distress related to emasculating aspects of the cancer experience. Further investigation is required to understand how self-compassion interacts with emotionality and subsequently influences distress in prostate cancer patients. To better understand the effectiveness of emotional approach coping in reducing distress in prostate cancer patients, it is recommended that future research account for the receptiveness of social environments to men's emotional displays. This article is protected by copyright. All rights reserved.

  13. Modelling approaches for relating effects of change in river flow to populations of Atlantic salmon and brown trout

    Treesearch

    John D. Armstrong; Keith H. Nislow

    2012-01-01

    Modelling approaches for relating discharge to the biology of Atlantic salmon, Salmo salar L., and brown trout, Salmo trutta L., growing in rivers are reviewed. Process-based and empirical models are set within a common framework of input of water flow and output of characteristics of fish, such as growth and survival, which relate directly to population dynamics. A...

  14. Traditional or centralized models of diabetes care: the multidisciplinary diabetes team approach.

    PubMed

    Bratcher, Christina R; Bello, Elizabeth

    2011-11-01

    Specialized diabetes care (SDC) centers utilize a multidisciplinary diabetes team to provide patients with highly individualized care. Patients at SDC centers receive their integrated diabetes care in one place: the "one-stop" approach. The components of the SDC center model are: medical care; individualized diabetes education; nutrition; exercise and lifestyle coaching; counseling; and monitoring of drug effects. This model results in improved patient outcomes and reduced overall costs.

  15. Multi-Element Behaviour Support as a Model for the Delivery of a Human Rights Based Approach for Working with People with Intellectual Disabilities and Behaviours that Challenge

    ERIC Educational Resources Information Center

    Doody, Christina

    2009-01-01

    This paper demonstrates the effectiveness of the multi-element behaviour support (MEBS) model in meeting the rights of persons with intellectual disabilities and behaviours that challenge. It does this through explicitly linking the multi-element model to the guiding principles of a human rights based approach (HRBA) using a vignette to…

  16. Blended Learning Model on Hands-On Approach for In-Service Secondary School Teachers: Combination of E-Learning and Face-to-Face Discussion

    ERIC Educational Resources Information Center

    Ho, Vinh-Thang; Nakamori, Yoshiteru; Ho, Tu-Bao; Lim, Cher Ping

    2016-01-01

    The purpose of this study was to examine the effectiveness of a blended learning model on hands-on approach for in-service secondary school teachers using a quasi-experimental design. A 24-h teacher-training course using the blended learning model was administered to 117 teachers, while face-to-face instruction was given to 60 teachers. The…

  17. The Effect of Delamination on Damage Path and Failure Load Prediction for Notched Composite Laminates

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Chunchu, Prasad B.

    2007-01-01

    The influence of delamination on the progression of the damage path and on the initial failure load in composite laminates is investigated. Results are presented from a numerical and an experimental study of center-notched tensile-loaded coupons. The numerical study includes two approaches. The first approach considers only intralaminar (fiber breakage and matrix cracking) damage modes in calculating the progression of the damage path. In the second approach, the model is extended to consider the effect of interlaminar (delamination) damage modes in addition to the intralaminar damage modes. The intralaminar damage is modeled using the progressive damage analysis (PDA) methodology implemented with the VUMAT subroutine in the ABAQUS finite element code. The interlaminar damage mode has been simulated using cohesive elements in ABAQUS. In the experimental study, 2-3 specimens each of two different stacking sequences of center-notched laminates are tensile loaded. The numerical results from the two different modeling approaches are compared with each other and with the experimentally observed results for both laminate types. The comparisons reveal that the second modeling approach, where the delamination damage mode is included together with the intralaminar damage modes, better simulates the experimentally observed damage modes and damage paths, which were characterized by splitting failures perpendicular to the notch tips in one or more layers. Additionally, the inclusion of the delamination mode resulted in a better prediction of the loads at which failure took place, which were higher than those predicted by the first modeling approach, which did not include delaminations.

  18. Physiology-based modelling approaches to characterize fish habitat suitability: Their usefulness and limitations

    NASA Astrophysics Data System (ADS)

    Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo

    2018-02-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach relies on the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe empirical relationships which can be found consistently within physiological data across the animal kingdom. The advantages of the DEB models are that they make use of the generalities found in terms of animal physiology and can therefore be applied to species for which little data or empirical observations are available. In addition, the limitations as well as useful potential refinements of these and other physiology-based modelling approaches are discussed. Inclusion of the physiological response of various life stages and modelling the patterns of extreme events observed in nature are suggested for future work.

  19. A novel single-parameter approach for forecasting algal blooms.

    PubMed

    Xiao, Xi; He, Junyu; Huang, Haomin; Miller, Todd R; Christakos, George; Reichwaldt, Elke S; Ghadouani, Anas; Lin, Shengpan; Xu, Xinhua; Shi, Jiyan

    2017-01-01

    Harmful algal blooms frequently occur globally, and forecasting could constitute an essential proactive strategy for bloom control. To decrease the cost of aquatic environmental monitoring and increase the accuracy of bloom forecasting, a novel single-parameter approach combining wavelet analysis with artificial neural networks (WNN) was developed and verified based on daily online monitoring datasets of algal density in the Siling Reservoir, China and Lake Winnebago, U.S.A. Firstly, a detailed modeling process was illustrated using the forecasting of cyanobacterial cell density in the Chinese reservoir as an example. Three WNN models covering various prediction time intervals were optimized through model training using an early stopped training approach. All models performed well in fitting historical data and predicting the dynamics of cyanobacterial cell density, with the best model predicting cyanobacteria density one day ahead (r = 0.986 and mean absolute error = 0.103 × 10⁴ cells mL⁻¹). Secondly, the potential of this novel approach was further confirmed by the precise predictions of algal biomass dynamics measured as chl a in both study sites, demonstrating its high performance in forecasting algal blooms, including cyanobacteria as well as other blooming species. Thirdly, the WNN model was compared to current algal forecasting methods (i.e. artificial neural networks, autoregressive integrated moving average model), and was found to be more accurate. In addition, the application of this novel single-parameter approach is cost effective as it requires only a buoy-mounted fluorescent probe, which is merely a fraction (∼15%) of the cost of a typical auto-monitoring system. As such, the newly developed approach presents a promising and cost-effective tool for the future prediction and management of harmful algal blooms. Copyright © 2016 Elsevier Ltd. All rights reserved.
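
    The wavelet-plus-neural-network idea can be sketched as a smoothing step followed by a small network on lagged values. The PyWavelets and scikit-learn calls below are standard, but the synthetic series, the db4 wavelet, and the network size are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Synthetic daily "algal density" series: a seasonal bloom plus noise.
t = np.arange(365)
series = 10 + 8 * np.exp(-((t - 200) / 30.0) ** 2) + rng.normal(0, 0.8, t.size)

# Wavelet denoising: decompose, zero the finest detail level, reconstruct.
coeffs = pywt.wavedec(series, "db4", level=3)
coeffs[-1] = np.zeros_like(coeffs[-1])
smooth = pywt.waverec(coeffs, "db4")[: series.size]

# One-day-ahead forecast from the last 7 smoothed values.
lags = 7
X = np.column_stack([smooth[i:i + len(smooth) - lags] for i in range(lags)])
y = series[lags:]

split = 300
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {np.mean(np.abs(pred - y[split:])):.3f}")
```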

  20. Improving Argumentative Writing: Effects of a Blended Learning Approach and Gamification

    ERIC Educational Resources Information Center

    Lam, Yau Wai; Hew, Khe Foon; Chiu, Kin Fung

    2018-01-01

    This study investigated the effectiveness of a blended learning approach--involving the thesis, analysis, and synthesis key (TASK) procedural strategy; online Edmodo discussions; online message labels; and writing models--on student argumentative writing in a Hong Kong secondary school. It also examined whether the application of digital game…

  1. Addressing Complex Challenges through Adaptive Leadership: A Promising Approach to Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Nelson, Tenneisha; Squires, Vicki

    2017-01-01

    Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…

  2. Cognitive Development Effects of Teaching Probabilistic Decision Making to Middle School Students

    ERIC Educational Resources Information Center

    Mjelde, James W.; Litzenberg, Kerry K.; Lindner, James R.

    2011-01-01

    This study investigated the comprehension and effectiveness of teaching formal, probabilistic decision-making skills to middle school students. Two specific objectives were to determine (1) if middle school students can comprehend a probabilistic decision-making approach, and (2) if exposure to the modeling approaches improves middle school…

  3. Investigating the Relationship among Extracurricular Activities, Learning Approach and Academic Outcomes: A Case Study

    ERIC Educational Resources Information Center

    Chan, Yiu-Kong

    2016-01-01

    Learning effectiveness requires an understanding of the relationship among extracurricular activities, learning approach and academic performance and, it is argued, this helps educators develop techniques designed to enrich learning effectiveness. Biggs' Presage-Process-Product model on student learning has identified the relationship among…

  4. Organizational Effectiveness in Libraries: A Review and Some Suggestions.

    ERIC Educational Resources Information Center

    Aversa, Elizabeth

    1981-01-01

    Reviews some approaches to organizational effectiveness suggested by organizational theorists, reports on the applications of these theories in libraries, develops some hypotheses regarding the assessment of performance in libraries, and describes a model which synthesizes some of the approaches. A 52-item reference list is attached. (Author/JL)

  5. Stiffness characteristics of airfoils under pulse loading

    NASA Astrophysics Data System (ADS)

    Turner, Kevin Eugene

    The turbomachinery industry continually struggles with the adverse effects of contact rubs between airfoils and casings. The key parameter controlling the severity of a given rub event is the contact load produced during airfoil tip incursion into the casing. These highly non-linear and transient forces are difficult to calculate, and their effects on the static and rotating components are not well understood. To help provide this insight, experimental and analytical capabilities have been established and exercised through an alliance between GE Aviation and The Ohio State University Gas Turbine Laboratory. One of the early findings of the program is the influence of blade flexibility on the physics of rub events. The core focus of the work presented in this dissertation is to quantify the influence of airfoil flexibility through a novel modeling approach based on the relationship between applied force duration and maximum tip deflection. This relationship is initially established using a series of forward, non-linear and transient analyses in which simulated impulse rub loads are applied. This procedure, although effective, is highly inefficient and costly, as it requires numerous explicit simulations. To alleviate this issue, a simplified model, named the pulse magnification model, is developed that requires only a modal analysis and a static analysis to fully describe how the airfoil stiffness changes with respect to load duration. Results from the pulse magnification model are compared to results from the full transient simulation method and to experimental results, providing sound verification for the use of the modeling approach. Furthermore, a unique and highly efficient method to model airfoil geometries was developed and is outlined in this dissertation. This method produces quality finite element airfoil definitions directly from a fully parameterized mathematical model. The effectiveness of this approach is demonstrated by comparing modal properties of the simulated geometries to modal properties of various current airfoil designs. Finally, this modeling approach was used in conjunction with the pulse magnification model to study the effects of various airfoil geometric features on the stiffness of the blade under impulsive loading.
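
    The force-duration/deflection relationship at the heart of the pulse magnification model can be illustrated with a textbook single-degree-of-freedom shock-spectrum calculation: integrate an oscillator driven by half-sine force pulses of varying duration and compare the peak displacement with the static deflection. The modal frequency and pulse durations below are invented; this sketch shows the underlying physics, not the dissertation's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

wn = 2 * np.pi * 100.0    # assumed modal frequency, rad/s (100 Hz blade mode)
F0 = 1.0                  # unit force on a unit modal mass
static = F0 / wn**2       # static deflection F0/k, with k = wn^2

def peak_deflection(t_pulse):
    """Peak displacement of an undamped SDOF oscillator under a
    half-sine force pulse of duration t_pulse."""
    def rhs(t, y):
        f = F0 * np.sin(np.pi * t / t_pulse) if t < t_pulse else 0.0
        return [y[1], f - wn**2 * y[0]]
    sol = solve_ivp(rhs, [0.0, t_pulse + 0.02], [0.0, 0.0],
                    max_step=min(t_pulse / 50, 2e-4))
    return np.max(np.abs(sol.y[0]))

for t_pulse in [0.2e-3, 1e-3, 5e-3, 20e-3]:
    dyn = peak_deflection(t_pulse) / static
    print(f"pulse {t_pulse * 1e3:5.1f} ms -> dynamic magnification {dyn:.2f}")
```

    Very short pulses respond impulsively (magnification well below 1, i.e. an effectively stiffer airfoil), while durations near the natural period maximize deflection; this duration dependence is what the pulse magnification model encodes.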

  6. Finite element simulations of the head-brain responses to the top impacts of a construction helmet: Effects of the neck and body mass.

    PubMed

    Wu, John Z; Pan, Christopher S; Wimer, Bryan M; Rosen, Charles L

    2017-01-01

    Traumatic brain injuries are among the most common severely disabling injuries in the United States. Construction helmets are considered essential personal protective equipment for reducing traumatic brain injury risks at work sites. In this study, we proposed a practical finite element modeling approach that would be suitable for engineers to optimize construction helmet design. The finite element model includes all essential anatomical structures of a human head (i.e. skin, scalp, skull, cerebrospinal fluid, brain, medulla, spinal cord, cervical vertebrae, and discs) and all major engineering components of a construction helmet (i.e. shell and suspension system). The head finite element model has been calibrated using the experimental data in the literature. It is technically difficult to precisely account for the effects of the neck and body mass on the dynamic responses, because the finite element model does not include the entire human body. An approximation approach has been developed to account for the effects of the neck and body mass on the dynamic responses of the head-brain. Using the proposed model, we have calculated the responses of the head-brain during a top impact when wearing a construction helmet. The proposed modeling approach would provide a tool to improve the helmet design on a biomechanical basis.

  7. Retrofitting and the mu Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Daniel; Weigand, Timo

    2010-08-26

    One of the challenges of supersymmetry (SUSY) breaking and mediation is generating a μ term consistent with the requirements of electroweak symmetry breaking. The most common approach to the problem is to generate the μ term through a SUSY breaking F-term. Often these models produce unacceptably large Bμ terms as a result. We present an alternate approach, where the μ term is generated directly by non-perturbative effects. The same non-perturbative effect also retrofits the model of SUSY breaking in such a way that μ is at the same scale as the masses of the Standard Model superpartners. Because the μ term is not directly generated by SUSY breaking effects, there is no associated Bμ problem. These results are demonstrated in a toy model where a stringy instanton generates μ.

  8. Patient-specific bone modeling and analysis: the role of integration and automation in clinical adoption.

    PubMed

    Zadpoor, Amir A; Weinans, Harrie

    2015-03-18

    Patient-specific analysis of bones is considered an important tool for diagnosis and treatment of skeletal diseases and for clinical research aimed at understanding the etiology of skeletal diseases and the effects of different types of treatment on their progress. In this article, we discuss how integration of several important components enables accurate and cost-effective patient-specific bone analysis, focusing primarily on patient-specific finite element (FE) modeling of bones. First, the different components are briefly reviewed. Then, two important aspects of patient-specific FE modeling, namely integration of modeling components and automation of modeling approaches, are discussed. We conclude with a section on validation of patient-specific modeling results, possible applications of patient-specific modeling procedures, current limitations of the modeling approaches, and possible areas for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-specified intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of the first-stage interventions and the timing of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  10. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  11. A Model for Applying Lexical Approach in Teaching Russian Grammar.

    ERIC Educational Resources Information Center

    Gettys, Serafima

    The lexical approach to teaching Russian grammar is explained, an instructional sequence is outlined, and a classroom study testing the effectiveness of the approach is reported. The lexical approach draws on research on cognitive psychology, second language acquisition theory, and research on learner language. Its bases in research and its…

  12. Using Combinatorial Approach to Improve Students' Learning of the Distributive Law and Multiplicative Identities

    ERIC Educational Resources Information Center

    Tsai, Yu-Ling; Chang, Ching-Kuch

    2009-01-01

    This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…

  13. Extending Antecedents of Achievement Goals: The Double-Edged Sword Effect of Social-Oriented Achievement Motive and Gender Differences

    ERIC Educational Resources Information Center

    Nie, Youyan; Liem, Gregory Arief D.

    2013-01-01

    Underpinned by the hierarchical model of approach and avoidance motivation, the study examined the differential relations of individual-oriented and social-oriented achievement motives to approach and avoidance achievement goals (mastery-approach, performance-approach, mastery-avoidance, performance-avoidance). A total of 570 Chinese high school…

  14. Receiving water quality assessment: comparison between simplified and detailed integrated urban modelling approaches.

    PubMed

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

    Urban water quality management often requires use of numerical models allowing the evaluation of the cause-effect relationship between the input(s) (i.e. rainfall, pollutant concentrations on catchment surface and in sewer system) and the resulting water quality response. The conventional approach to the system (i.e. sewer system, wastewater treatment plant and receiving water body), considering each component separately, does not enable optimisation of the whole system. However, recent gains in understanding and modelling make it possible to represent the system as a whole and optimise its overall performance. Indeed, integrated urban drainage modelling is of growing interest for tools to cope with Water Framework Directive requirements. Two different approaches can be employed for modelling the whole urban drainage system: detailed and simplified. Each has its advantages and disadvantages. Specifically, detailed approaches can offer a higher level of reliability in the model results, but can be very time consuming from the computational point of view. Simplified approaches are faster but may lead to greater model uncertainty due to over-simplification. To gain insight into the above problem, two different modelling approaches have been compared with respect to their uncertainty. The first integrated urban drainage model approach uses the Saint-Venant equations and the 1D advection-dispersion equations for the quantity and quality aspects, respectively. The second model approach consists of a simplified reservoir model. The analysis used a parsimonious bespoke model developed in previous studies. For the uncertainty analysis, the Generalised Likelihood Uncertainty Estimation (GLUE) procedure was used. Model reliability was evaluated on the basis of its capacity to globally limit the uncertainty. Both models show a good capability to fit the experimental data, suggesting that the adopted approaches are equivalent for both quantity and quality. The detailed model approach is more robust and presents less uncertainty in terms of uncertainty bands. On the other hand, the simplified river water quality model approach shows higher uncertainty and may be unsuitable for receiving water body quality assessment.
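
    The GLUE procedure referred to above can be sketched in a few lines: sample parameter sets, score each against observations with an informal likelihood (Nash-Sutcliffe efficiency here), keep the "behavioural" sets above a threshold, and read prediction bounds off the retained simulations. The linear-reservoir stand-in for the drainage model and the 0.9 threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def toy_drainage_model(k, t):
    """Stand-in for the integrated drainage model: linear-reservoir
    recession of outflow after a rainfall pulse, with decay rate k."""
    return np.exp(-k * t)

t = np.linspace(0, 10, 50)
obs = toy_drainage_model(0.4, t) + rng.normal(0, 0.03, t.size)

# GLUE: Monte Carlo sampling, informal likelihood, behavioural threshold.
ks = rng.uniform(0.05, 1.0, 5000)
sims = np.array([toy_drainage_model(k, t) for k in ks])
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioural = sims[nse > 0.9]

# 5-95% prediction band from the behavioural runs (full GLUE weights the
# quantiles by the likelihood of each retained parameter set).
lower = np.quantile(behavioural, 0.05, axis=0)
upper = np.quantile(behavioural, 0.95, axis=0)
print(f"{len(behavioural)} behavioural sets,"
      f" mean band width {np.mean(upper - lower):.3f}")
```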

  15. Analysis of bullwhip effect on supply chain with Q model using Hadley-Within approach

    NASA Astrophysics Data System (ADS)

    Siregar, I.; Nasution, A. A.; Matondang, N.; Persada, M. R.; Syahputri, K.

    2018-02-01

    This research was conducted at a tapioca flour company that uses cassava as the raw material for its tapioca starch product. The company's planning is inaccurate, producing persistent gaps between the quantity demanded and the supply actually delivered. The research problem is therefore how to analyze the bullwhip effect in the supply chain using the Q model with the Hadley-Within approach, so that the company's product distribution system is not disrupted. The 2015 forecasts were lower than actual 2016 demand for both distributors and the manufacturer, with average percentage differences of 38.24% for distributor Supermarket A, 89.57% for Supermarket B, and 43.11% for manufacturing. This distortion of demand information indicates the presence of a bullwhip effect in the supply chain. The proposed improvement for dampening the bullwhip effect is an inventory control policy based on the Q model using the Hadley-Within approach.
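
    The Hadley-Within approach named above is the classical iterative solution of the continuous-review (Q, r) inventory model: start from the economic order quantity, set the reorder point from the stockout-probability condition, then update Q with the expected-shortage term and repeat until convergence. The cost and demand figures below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

D = 1200.0                  # annual demand (units/year), assumed
A = 50.0                    # ordering cost per order
h = 4.0                     # holding cost per unit per year
p = 25.0                    # shortage cost per unit
mu_L, sd_L = 100.0, 20.0    # normal lead-time demand: mean, std dev

def expected_shortage(r):
    """E[(X - r)+] for lead-time demand X ~ Normal(mu_L, sd_L)."""
    z = (r - mu_L) / sd_L
    return sd_L * (norm.pdf(z) - z * (1 - norm.cdf(z)))

Q = np.sqrt(2 * A * D / h)           # start from the plain EOQ
for _ in range(50):
    prob_stockout = Q * h / (p * D)  # optimality condition for r
    r = mu_L + sd_L * norm.ppf(1 - prob_stockout)
    Q_new = np.sqrt(2 * D * (A + p * expected_shortage(r)) / h)
    if abs(Q_new - Q) < 1e-6:
        break
    Q = Q_new

print(f"order quantity Q = {Q:.1f}, reorder point r = {r:.1f}")
```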

  16. A Joint Modeling Approach for Reaction Time and Accuracy in Psycholinguistic Experiments

    ERIC Educational Resources Information Center

    Loeys, T.; Rosseel, Y.; Baten, K.

    2011-01-01

    In the psycholinguistic literature, reaction times and accuracy can be analyzed separately using mixed (logistic) effects models with crossed random effects for item and subject. Given the potential correlation between these two outcomes, a joint model for the reaction time and accuracy may provide further insight. In this paper, a Bayesian…

  17. The Robust Learning Model with a Spiral Curriculum: Implications for the Educational Effectiveness of Online Master Degree Programs

    ERIC Educational Resources Information Center

    Neumann, Yoram; Neumann, Edith; Lewis, Shelia

    2017-01-01

    This study integrated the Spiral Curriculum approach into the Robust Learning Model as part of a continuous improvement process that was designed to improve educational effectiveness and then assessed the differences between the initial and integrated models as well as the predictability of the first course in the integrated learning model on a…

  18. Revisiting Fixed- and Random-Effects Models: Some Considerations for Policy-Relevant Education Research

    ERIC Educational Resources Information Center

    Clarke, Paul; Crawford, Claire; Steele, Fiona; Vignoles, Anna

    2015-01-01

    The use of fixed (FE) and random effects (RE) in two-level hierarchical linear regression is discussed in the context of education research. We compare the robustness of FE models with the modelling flexibility and potential efficiency of those from RE models. We argue that the two should be seen as complementary approaches. We then compare both…

  19. ACCELERATED FAILURE TIME MODELS PROVIDE A USEFUL STATISTICAL FRAMEWORK FOR AGING RESEARCH

    PubMed Central

    Swindell, William R.

    2009-01-01

    Survivorship experiments play a central role in aging research and are performed to evaluate whether interventions alter the rate of aging and increase lifespan. The accelerated failure time (AFT) model is seldom used to analyze survivorship data, but offers a potentially useful statistical approach that is based upon the survival curve rather than the hazard function. In this study, AFT models were used to analyze data from 16 survivorship experiments that evaluated the effects of one or more genetic manipulations on mouse lifespan. Most genetic manipulations were found to have a multiplicative effect on survivorship that is independent of age and well-characterized by the AFT model “deceleration factor”. AFT model deceleration factors also provided a more intuitive measure of treatment effect than the hazard ratio, and were robust to departures from modeling assumptions. Age-dependent treatment effects, when present, were investigated using quantile regression modeling. These results provide an informative and quantitative summary of survivorship data associated with currently known long-lived mouse models. In addition, from the standpoint of aging research, these statistical approaches have appealing properties and provide valuable tools for the analysis of survivorship data. PMID:19007875

  20. Accelerated failure time models provide a useful statistical framework for aging research.

    PubMed

    Swindell, William R

    2009-03-01

    Survivorship experiments play a central role in aging research and are performed to evaluate whether interventions alter the rate of aging and increase lifespan. The accelerated failure time (AFT) model is seldom used to analyze survivorship data, but offers a potentially useful statistical approach that is based upon the survival curve rather than the hazard function. In this study, AFT models were used to analyze data from 16 survivorship experiments that evaluated the effects of one or more genetic manipulations on mouse lifespan. Most genetic manipulations were found to have a multiplicative effect on survivorship that is independent of age and well-characterized by the AFT model "deceleration factor". AFT model deceleration factors also provided a more intuitive measure of treatment effect than the hazard ratio, and were robust to departures from modeling assumptions. Age-dependent treatment effects, when present, were investigated using quantile regression modeling. These results provide an informative and quantitative summary of survivorship data associated with currently known long-lived mouse models. In addition, from the standpoint of aging research, these statistical approaches have appealing properties and provide valuable tools for the analysis of survivorship data.
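
    The "deceleration factor" has a direct empirical reading: under an AFT model, S_treated(t) = S_control(t/phi), so every quantile of the treated lifespans is phi times the matching control quantile. Below is a minimal sketch with simulated lifespans and an assumed true phi of 1.25; with censored data, a parametric AFT fit would replace the raw quantile ratios.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated mouse lifespans (days): the intervention stretches the time
# axis by a true deceleration factor phi = 1.25 at all ages.
phi_true = 1.25
control = rng.weibull(3.0, 200) * 800.0
treated = phi_true * rng.weibull(3.0, 200) * 800.0

# Under the AFT model every survival quantile scales by phi, so ratios
# of matching quantiles estimate the deceleration factor.
qs = np.linspace(0.1, 0.9, 9)
ratios = np.quantile(treated, qs) / np.quantile(control, qs)
print("quantile ratios:", np.round(ratios, 2))
print(f"estimated deceleration factor: {ratios.mean():.2f}")
```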

  1. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research, but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
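
    The regression trick at the heart of this approach is compact enough to sketch. Below is a toy EVPPI calculation in the style of Strong and colleagues, assuming the pygam package; the linear toy PSA model and all names are invented, and a real analysis would regress on the actual simulation model's PSA output:

        # Sketch of the GAM regression approach to EVPPI from PSA output;
        # two strategies, one focal parameter. pygam assumed available.
        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(2)
        n = 5000
        theta = rng.normal(0.0, 1.0, n)            # focal uncertain parameter
        noise = rng.normal(0.0, 2.0, n)            # all other PSA uncertainty
        inb = 1000.0 * theta + 500.0 * noise       # incremental net benefit draws

        gam = LinearGAM(s(0)).fit(theta.reshape(-1, 1), inb)
        ghat = gam.predict(theta.reshape(-1, 1))   # estimated E[INB | theta]

        # EVPPI = E[max(E[INB|theta], 0)] - max(E[INB], 0) for two strategies.
        evppi = np.mean(np.maximum(ghat, 0.0)) - max(np.mean(inb), 0.0)
        print("EVPPI estimate:", evppi)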

  2. "Dispersion modeling approaches for near road

    EPA Science Inventory

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...

  3. Biologically optimized helium ion plans: calculation approach and its in vitro validation

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.

    2016-06-01

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.

  4. A Comparison of Approaches for the Analysis of Interaction Effects between Latent Variables Using Partial Least Squares Path Modeling

    ERIC Educational Resources Information Center

    Henseler, Jorg; Chin, Wynne W.

    2010-01-01

    In social and business sciences, the importance of the analysis of interaction effects between manifest as well as latent variables steadily increases. Researchers using partial least squares (PLS) to analyze interaction effects between latent variables need an overview of the available approaches as well as their suitability. This article…

  5. Decompensation: A Novel Approach to Accounting for Stress Arising From the Effects of Ideology and Social Norms.

    PubMed

    Riggs, Damien W; Treharne, Gareth J

    2017-01-01

    To date, research that has drawn on Meyer's (2003) minority stress model has largely taken for granted the premises underpinning it. In this article we provide a close reading of how "stress" is conceptualized in the model and suggest that aspects of the model do not attend to the institutionalized nature of stressors experienced by people with marginalized identities, particularly lesbian, gay, bisexual, and transgender individuals. As a counter to this, we highlight the importance of a focus on the effects of ideology and social norms in terms of stress, and we argue why an intersectional approach is necessary to ensure recognition of multiple axes of marginalization and privilege. The article then outlines the concept of decompensation and suggests that it may offer one way to reconsider the effects of ideology and social norms. The decompensation approach centers on the need for social change rather than solely relying on individuals to be resilient.

  6. A metabolomics and mouse models approach to study inflammatory and immune responses to radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fornace, Albert J.; Li, Henghong

    2013-12-02

    The three-year project entitled "A Metabolomics and Mouse Models Approach to Study Inflammatory and Immune Responses to Radiation" was initiated in September 2009. The overall objectives of this project were to investigate the acute and persistent effects of low dose radiation on T cell lymphocyte function and physiology, as well as the contributions of these cells to radiation-induced inflammatory responses. Inflammation after ionizing radiation (IR), even at low doses, may impact a variety of disease processes, including infectious disease, cardiovascular disease, cancer, and other potentially inflammatory disorders. There were three overall specific aims: 1. To investigate acute and persistent effects of low dose radiation on T cell subsets and function; 2. To use a genetic approach with mouse models to investigate p38 MAPK pathways that are involved in radiation-induced inflammatory signaling; 3. To investigate the effect of radiation quality on the inflammatory response. We have completed the work proposed in these aims.

  7. Evaluating disease management program effectiveness: an introduction to survival analysis.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-01-01

    Currently, the most widely used method in the disease management industry for evaluating program effectiveness is the "total population approach." This model is a pretest-posttest design, whose most basic limitation is that, without a control group, there may be sources of bias and/or competing extraneous confounding factors that offer a plausible rationale explaining the change from baseline. Survival analysis allows for the inclusion of data from censored cases: those subjects who either "survived" the program without experiencing the event (e.g., achievement of target clinical levels, hospitalization), left the program prematurely due to disenrollment from the health plan or program, or were lost to follow-up. Additionally, independent variables may be included in the model to help explain the variability in the outcome measure. To maximize the potential of this statistical method, the validity of the model and research design must be assured. This paper reviews survival analysis as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
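
    To make the censoring argument concrete, here is a minimal sketch (lifelines assumed, toy data invented) in which disenrolled and lost-to-follow-up members contribute person-time up to their censoring date instead of being dropped:

        # Sketch: Kaplan-Meier curves for program vs comparison group,
        # keeping censored members (disenrolled / lost to follow-up) in
        # the risk set. Data are illustrative only.
        import pandas as pd
        from lifelines import KaplanMeierFitter

        df = pd.DataFrame({
            "months":  [3, 5, 6, 6, 8, 10, 12, 12, 2, 4, 7, 9, 11, 12],
            "event":   [1, 1, 0, 1, 1,  0,  1,  0, 1, 1, 1, 0,  1,  0],  # 0 = censored
            "program": [1, 1, 1, 1, 1,  1,  1,  1, 0, 0, 0, 0,  0,  0],
        })

        kmf = KaplanMeierFitter()
        for grp, sub in df.groupby("program"):
            kmf.fit(sub["months"], event_observed=sub["event"],
                    label=f"program={grp}")
            print(kmf.survival_function_.tail(1))  # survival at end of follow-up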

  8. Determining the effects of patient casemix on length of hospital stay: a proportional hazards frailty model approach.

    PubMed

    Lee, A H; Yau, K K

    2001-01-01

    To identify factors associated with hospital length of stay (LOS) and to model variations in LOS within Diagnosis Related Groups (DRGs). A proportional hazards frailty modelling approach is proposed that accounts for patient transfers and the inherent correlation of patients clustered within hospitals. The investigation is based on patient discharge data extracted for a group of obstetrical DRGs. Application of the frailty approach has highlighted several significant factors after adjustment for patient casemix and random hospital effects. In particular, patients admitted for childbirth with private medical insurance coverage have a higher risk of prolonged hospitalization than public patients. The determination of pertinent factors provides important information to hospital management and clinicians in assessing the risk of prolonged hospitalization. The analysis also enables the comparison of inter-hospital variations across adjacent DRGs.
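
    lifelines has no shared-frailty term, so the sketch below substitutes a Cox model with cluster-robust (sandwich) standard errors by hospital as a stand-in for the paper's frailty approach; the data and the 'private' effect are simulated for illustration only:

        # Sketch: length-of-stay model with patients clustered in hospitals.
        # Cluster-robust Cox errors stand in for a shared-frailty term.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n, n_hosp = 600, 20
        hosp = rng.integers(0, n_hosp, n)
        frail = rng.gamma(4.0, 0.25, n_hosp)[hosp]     # hospital-level frailty
        private = rng.integers(0, 2, n)
        # Lower discharge hazard for private patients => longer stays.
        los = rng.exponential(1.0 / (frail * np.exp(-0.4 * private)))
        df = pd.DataFrame({"los": los, "event": 1,
                           "private": private, "hospital": hosp})

        cph = CoxPHFitter()
        cph.fit(df, duration_col="los", event_col="event",
                cluster_col="hospital", formula="private")
        cph.print_summary()   # HR < 1 for 'private' means longer stays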

  9. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results obtained with much less solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
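
    The iterate-until-stable idea is easy to express as a skeleton. The sketch below is a generic rendering, not the authors' algorithm: the analytical solver and the simulator are placeholder callables, and the termination criterion is a simple cost difference:

        # Skeleton of an iterative analytical/simulation loop: optimize with
        # a (placeholder) analytical model, evaluate with a (placeholder)
        # simulation, repeat until successive solutions agree.
        def hybrid_optimize(analytical_solve, simulate, tol=1e-3, max_iter=20):
            params = {}                    # simulation-derived corrections
            prev_cost = float("inf")
            for _ in range(max_iter):
                design = analytical_solve(params)   # e.g. a MILP solve
                cost, params = simulate(design)     # discrete-event evaluation
                if abs(prev_cost - cost) < tol:     # termination criterion
                    break
                prev_cost = cost
            return design, cost

        # Toy demo: the "simulation" asks for 10 more units of capacity once.
        solve = lambda p: {"capacity": 100 + p.get("extra", 0)}
        sim = lambda d: (abs(d["capacity"] - 110), {"extra": 10})
        print(hybrid_optimize(solve, sim))  # converges to capacity 110, cost 0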

  10. Towards a bulk approach to local interactions of hydrometeors

    NASA Astrophysics Data System (ADS)

    Baumgartner, Manuel; Spichtinger, Peter

    2018-02-01

    The growth of small cloud droplets and ice crystals is dominated by the diffusion of water vapor. Usually, Maxwell's approach to growth for isolated particles is used in describing this process. However, recent investigations show that local interactions between particles can change diffusion properties of cloud particles. In this study we develop an approach for including these local interactions into a bulk model approach. For this purpose, a simplified framework of local interaction is proposed and governing equations are derived from this setup. The new model is tested against direct simulations and incorporated into a parcel model framework. Using the parcel model, possible implications of the new model approach for clouds are investigated. The results indicate that for specific scenarios the lifetime of cloud droplets in subsaturated air may be longer (e.g., for an initially water supersaturated air parcel within a downdraft). These effects might have an impact on mixed-phase clouds, for example in terms of riming efficiencies.

  11. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    PubMed

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among the acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. A multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB, R² = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make optimal decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  12. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    PubMed

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
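
    A minimal numeric sketch of the two-condition within-participant indirect effect with a percentile bootstrap, in the spirit of the path-analytic method described above (Python, synthetic data, all names invented; the authors' own SPSS/SAS/Mplus macros are the canonical implementation):

        # Sketch: indirect effect a*b for a two-condition within-participant
        # design, with a percentile bootstrap confidence interval.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 120
        m1 = rng.normal(0, 1, n); m2 = m1 + 0.5 + rng.normal(0, 1, n)
        y1 = 0.6 * m1 + rng.normal(0, 1, n)
        y2 = 0.6 * m2 + 0.2 + rng.normal(0, 1, n)

        def indirect(idx):
            mdiff, ydiff = m2[idx] - m1[idx], y2[idx] - y1[idx]
            msum_c = (m1[idx] + m2[idx]) / 2
            msum_c = msum_c - msum_c.mean()
            X = sm.add_constant(np.column_stack([mdiff, msum_c]))
            b = sm.OLS(ydiff, X).fit().params[1]  # path b, adjusting for M level
            a = mdiff.mean()                      # path a: condition effect on M
            return a * b

        idx = np.arange(n)
        boot = [indirect(rng.choice(idx, n, replace=True)) for _ in range(5000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect = {indirect(idx):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")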

  13. Control structural interaction testbed: A model for multiple flexible body verification

    NASA Technical Reports Server (NTRS)

    Chory, M. A.; Cohen, A. L.; Manning, R. A.; Narigon, M. L.; Spector, V. A.

    1993-01-01

    Conventional end-to-end ground tests for verification of control system performance become increasingly complicated with the development of large, multiple flexible body spacecraft structures. The expense of accurately reproducing the on-orbit dynamic environment and the attendant difficulties in reducing and accounting for ground test effects limit the value of these tests. TRW has developed a building block approach whereby a combination of analysis, simulation, and test has replaced end-to-end performance verification by ground test. Tests are performed at the component, subsystem, and system level on engineering testbeds. These tests are aimed at authenticating models to be used in end-to-end performance verification simulations: component and subassembly engineering tests and analyses establish models and critical parameters, unit level engineering and acceptance tests refine models, and subsystem level tests confirm the models' overall behavior. The Precision Control of Agile Spacecraft (PCAS) project has developed a control structural interaction testbed with a multibody flexible structure to investigate new methods of precision control. This testbed is a model for TRW's approach to verifying control system performance. This approach has several advantages: (1) no allocation for test measurement errors is required, increasing flight hardware design allocations; (2) the approach permits greater latitude in investigating off-nominal conditions and parametric sensitivities; and (3) the simulation approach is cost effective, because the investment is in understanding the root behavior of the flight hardware and not in the ground test equipment and environment.

  14. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    PubMed Central

    2011-01-01

    Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or Bayesian approach (if based on vague priors, and assuming no preference from a philosophical point of view). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
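
    For readers working in Python rather than the packages compared above, a rough equivalent of the binary random-center model is sketched below using statsmodels' variational Bayes mixed GLM (one of many possible implementations, not one evaluated in the paper; data are simulated and variable names invented):

        # Sketch: logistic random-intercept model with a random center effect.
        import numpy as np
        import pandas as pd
        from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

        rng = np.random.default_rng(5)
        n, n_centers = 2000, 40
        center = rng.integers(0, n_centers, n)
        u = rng.normal(0, 0.7, n_centers)[center]   # random center effect
        age = rng.normal(40, 15, n)
        logit = -1.0 + 0.03 * (age - 40) + u
        y = rng.random(n) < 1 / (1 + np.exp(-logit))
        df = pd.DataFrame({"y": y.astype(int), "age": age, "center": center})

        model = BinomialBayesMixedGLM.from_formula(
            "y ~ age", {"center": "0 + C(center)"}, df)
        fit = model.fit_vb()          # fast variational approximation
        print(fit.summary())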

  15. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
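
    The downward bias the authors describe is easy to reproduce. A minimal sketch (synthetic data, statsmodels; names invented) comparing the naive OLS standard error with the mixed-model one when several neurons are sampled per animal:

        # Sketch: simple linear vs mixed-effects analysis of Sholl-type data
        # with several neurons sampled per animal.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n_animals, n_neurons = 8, 6
        animal = np.repeat(np.arange(n_animals), n_neurons)
        sex = animal % 2                               # treatment-like factor
        u = rng.normal(0, 2.0, n_animals)[animal]      # animal-level effect
        intersections = 10 + 1.0 * sex + u + rng.normal(0, 1.0, animal.size)
        df = pd.DataFrame({"intersections": intersections,
                           "sex": sex, "animal": animal})

        ols = smf.ols("intersections ~ sex", data=df).fit()
        mix = smf.mixedlm("intersections ~ sex", data=df,
                          groups=df["animal"]).fit()
        # The OLS standard error ignores intra-animal correlation and is
        # biased downward; the mixed model's is the honest one.
        print("OLS   se(sex):", ols.bse["sex"])
        print("Mixed se(sex):", mix.bse["sex"])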

  16. Simulation of Grouting Process in Rock Masses Under a Dam Foundation Characterized by a 3D Fracture Network

    NASA Astrophysics Data System (ADS)

    Deng, Shaohui; Wang, Xiaoling; Yu, Jia; Zhang, Yichi; Liu, Zhen; Zhu, Yushan

    2018-06-01

    Grouting plays a crucial role in dam safety. Due to the concealment of grouting activities, the complexity of fracture distribution in rock masses and the rheological properties of cement grout, it is difficult to analyze the effects of grouting. In this paper, a computational fluid dynamics (CFD) simulation approach for dam foundation grouting based on a 3D fracture network model is proposed. In this approach, the 3D fracture network model, which is based on an improved bootstrap sampling method and established with the VisualGeo software, provides a reliable and accurate geometric model for CFD simulation of dam foundation grouting. Based on this model, a CFD simulation is performed in which the Papanastasiou regularized model is used to express the grout rheological properties, and the volume of fluid technique is utilized to capture the grout fronts. Two sets of tests are performed to verify the effectiveness of the Papanastasiou regularized model. When applying the CFD simulation approach to dam foundation grouting, three technical issues can be addressed: (1) the collapsing potential of the fracture samples, (2) inconsistencies between the geometric model and actual fractures under complex geological conditions, and (3) inappropriate characterization of the rheological properties of cement grout. The applicability of the proposed approach is demonstrated by an illustrative case study of a hydropower station dam foundation in southwestern China.

  17. Pharmacotherapy in Generalized Anxiety Disorder: Novel Experimental Medicine Models and Emerging Drug Targets.

    PubMed

    Baldwin, David S; Hou, Ruihua; Gordon, Robert; Huneke, Nathan T M; Garner, Matthew

    2017-04-01

    Many pharmacological and psychological approaches have been found efficacious in patients with generalized anxiety disorder (GAD), but many treatment-seeking patients will not respond and others will relapse despite continuing with interventions that initially had beneficial effects. Other patients will respond but then stop treatment early because of untoward effects such as sexual dysfunction, drowsiness, and weight gain. There is much scope for the development of novel approaches that could have greater overall effectiveness or acceptability than currently available interventions or that have particular effectiveness in specific clinical subgroups. 'Experimental medicine' studies in healthy volunteers model disease states and represent a proof-of-concept approach for the development of novel therapeutic interventions: they determine whether to proceed to pivotal efficacy studies and so can reduce delays in translating innovations into clinical practice. Investigations in healthy volunteers challenged with the inhalation of air 'enriched' with 7.5% carbon dioxide (CO2) indicate this technique provides a validated and robust experimental medicine model, mirroring the subjective, autonomic, and cognitive features of GAD. The anxiety response during CO2 challenge probably involves both central noradrenergic neurotransmission and effects on acid-base sensitive receptors and so may stimulate development of novel agents targeted at central chemosensors. Increasing awareness of the potential role of altered cytokine balance in anxiety and the interplay of cytokines with monoaminergic mechanisms may also encourage the investigation of novel agents with modulating effects on immunological profiles. Although seemingly disparate, these two approaches to treatment development may pivot on a shared mechanism in exerting anxiolytic-like effects through pharmacological effects on acid-sensing ion channels.

  18. Combined acute ecotoxicity of malathion and deltamethrin to Daphnia magna (Crustacea, Cladocera): comparison of different data analysis approaches.

    PubMed

    Toumi, Héla; Boumaiza, Moncef; Millet, Maurice; Radetski, Claudemir Marcos; Camara, Baba Issa; Felten, Vincent; Masfaraud, Jean-François; Férard, Jean-François

    2018-04-19

    We studied the combined acute effect (i.e., after 48 h) of deltamethrin (a pyrethroid insecticide) and malathion (an organophosphate insecticide) on Daphnia magna. Two approaches were used to examine the potential interaction effects of eight mixtures of deltamethrin and malathion: (i) calculation of the mixture toxicity index (MTI) and safety factor index (SFI) and (ii) response surface methodology coupled with an isobole-based statistical model (using a generalized linear model). According to the MTI and SFI calculations, one tested mixture was found to be additive while the other two tested mixtures were found to be non-additive (MTI) or antagonistic (SFI); these differences between index responses are due only to differences in the terminology associated with the two indexes. Through the surface response approach and isobologram analysis, we concluded that there was a significant antagonistic effect of the binary mixtures of deltamethrin and malathion on D. magna immobilization after 48 h of exposure. The index approaches and the surface response approach with isobologram analysis are complementary. Calculation of the mixture toxicity index and safety factor index identifies, mixture by mixture, the type of interaction for the tested mixtures, while the surface response approach with isobologram analysis integrates all the data, providing a global outcome about the type of interactive effect. Only the surface response approach and isobologram analysis allowed the statistical assessment of the ecotoxicological interaction. Nevertheless, we recommend the use of both approaches (i) to identify the combined effects of contaminants and (ii) to improve risk assessment and environmental management.

  19. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
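
    The effect of a moderately informative heterogeneity prior can be sketched on the simplest case, a single pairwise random-effects meta-analysis; a full MTC would add the consistency structure across comparisons. PyMC is assumed, and the data are toy values, not the paper's:

        # Sketch: random-effects meta-analysis with a moderately informative
        # prior on the between-trial SD, instead of a vague Uniform prior.
        import numpy as np
        import pymc as pm

        y = np.array([-0.3, -0.1, -0.4, 0.0, -0.2])   # trial log-odds ratios
        se = np.array([0.15, 0.20, 0.25, 0.30, 0.20])

        with pm.Model():
            mu = pm.Normal("mu", 0.0, 10.0)     # vague prior on pooled effect
            # Moderately informative heterogeneity prior, e.g. built from
            # external pairwise data:
            tau = pm.HalfNormal("tau", sigma=0.3)
            theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))
            pm.Normal("obs", mu=theta, sigma=se, observed=y)
            idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

        print(idata.posterior["tau"].mean().item())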

  20. Simplified models vs. effective field theory approaches in dark matter searches

    NASA Astrophysics Data System (ADS)

    De Simone, Andrea; Jacques, Thomas

    2016-07-01

    In this review we discuss and compare the usage of simplified models and Effective Field Theory (EFT) approaches in dark matter searches. We provide a state-of-the-art description of the subject of EFTs and simplified models, especially in the context of collider searches for dark matter, but also with implications for direct and indirect detection searches, with the aim of constituting a common language for future comparisons between different strategies. The material is presented in a form that is as self-contained as possible, so that it may serve as an introductory review for the newcomer as well as a reference guide for the practitioner.

  1. A Five-Stage Prediction-Observation-Explanation Inquiry-Based Learning Model to Improve Students' Learning Performance in Science Courses

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Chen, Jyun-Chen; Hong, Jon-Chao; Chen, Po-Hsi; Lu, Chow-Chin; Chen, Sherry Y.

    2017-01-01

    A five-stage prediction-observation-explanation inquiry-based learning (FPOEIL) model was developed to improve students' scientific learning performance. In order to intensify the science learning effect, the repertory grid technology-assisted learning (RGTL) approach and the collaborative learning (CL) approach were utilized. A quasi-experimental…

  2. Addressing Challenges in Urban Teaching, Learning and Math Using Model-Strategy-Application with Reasoning Approach in Lingustically and Culturally Diverse Classrooms

    ERIC Educational Resources Information Center

    Wu, Zhonghe; An, Shuhua

    2016-01-01

    This study examined the effects of using the Model-Strategy-Application with Reasoning Approach (MSAR) in teaching and learning mathematics in linguistically and culturally diverse elementary classrooms. Through learning mathematics via the MSAR, students from different language ability groups gained an understanding of mathematics from creating…

  3. An Advanced Multiple Alternatives Modeling Formulation for Determining Graduated Fiscal Support Strategies for Operational and Planned Educational Programs.

    ERIC Educational Resources Information Center

    Wholeben, Brent Edward

    A rationale is presented for viewing the decision-making process inherent in determining budget reductions for educational programs as most effectively modeled by a graduated funding approach. The major tenets of the graduated budget reduction approach to educational fiscal policy include the development of multiple alternative reduction plans, or…

  4. Modeling Creep Effects in Advanced SiC/SiC Composites

    NASA Technical Reports Server (NTRS)

    Lang, Jerry; DiCarlo, James

    2006-01-01

    Because advanced SiC/SiC composites are projected to be used for aerospace components with large thermal gradients at high temperatures, efforts are ongoing at NASA Glenn to develop approaches for modeling the anticipated creep behavior of these materials and its subsequent effects on such key composite properties as internal residual stress, proportional limit stress, ultimate tensile strength, and rupture life. Based primarily on in-plane creep data for 2D panels, this presentation describes initial modeling progress at applied composite stresses below matrix cracking for some high performance SiC/SiC composite systems recently developed at NASA. Studies are described to develop creep and rupture models using empirical, mechanical analog, and mechanistic approaches, and to implement them into finite element codes for improved component design and life modeling.

  5. A framework for multi-criteria assessment of model enhancements

    NASA Astrophysics Data System (ADS)

    Francke, Till; Foerster, Saskia; Brosinsky, Arlena; Delgado, José; Güntner, Andreas; López-Tarazón, José A.; Bronstert, Axel

    2016-04-01

    Modellers are often faced with unsatisfactory model performance for a specific setup of a hydrological model. In these cases, the modeller may try to improve the setup by addressing selected causes of the model errors (e.g. data errors, structural errors). This leads to adding certain "model enhancements" (MEs), e.g. climate data based on more monitoring stations, improved calibration data, or modifications in process formulations. However, deciding on which MEs to implement remains a matter of expert knowledge, guided by some sensitivity analysis at best. When multiple MEs have been implemented, a resulting improvement in model performance is not easily attributed, especially when considering different aspects of this improvement (e.g. better performance dynamics vs. reduced bias). In this study we present an approach for comparing the effect of multiple MEs in the face of multiple improvement aspects. A stepwise selection approach and structured plots help in addressing the multidimensionality of the problem. The approach is applied to a case study, which employs the meso-scale hydrosedimentological model WASA-SED for a sub-humid catchment. The results suggest that the effect of the MEs is quite diverse, with some MEs (e.g. augmented rainfall data) causing improvements for almost all aspects, while the effect of other MEs is restricted to a few aspects or even deteriorates some. These specific results may not be generalizable. However, we suggest that, based on studies like this, identifying the most promising MEs to implement may be facilitated.

  6. Tiltrotor noise reduction through flight trajectory management and aircraft configuration control

    NASA Astrophysics Data System (ADS)

    Gervais, Marc

    A tiltrotor can hover, take off and land vertically as well as cruise at high speeds and fly long distances. Because of these unique capabilities, tiltrotors are envisioned as aircraft that could provide a solution to the issue of airport gridlock by operating on stub runways, helipads, or from smaller regional airports. However, during an approach-to-land a tiltrotor is susceptible to radiating strong impulsive noise, in particular Blade-Vortex Interaction (BVI) noise, a phenomenon highly dependent on the vehicle's performance-state. A mathematical model was developed to predict the quasi-static performance characteristics of a tiltrotor during a converting approach in the longitudinal plane. Additionally, a neural network was designed to model the acoustic results from a flight test of the XV-15 tiltrotor as a function of the aircraft's performance parameters. The performance model was linked to the neural network to yield a combined performance/acoustic model that is capable of predicting tiltrotor noise emitted during a decelerating approach. The model was then used to study noise trends associated with different combinations of airspeed, nacelle tilt, and flight path angle. It showed that BVI noise is the dominant noise source during a descent and that its strength increases with steeper descent angles. Strong BVI noise was observed at very steep flight path angles, suggesting that the tiltrotor's high downwash prevents the wake from being pushed above the rotor, even at such steep descent angles. The model was used to study the effects of various aircraft configuration and flight trajectory parameters on the rotor inflow, which adequately captured the measured BVI noise trends. Flight path management effectively constrained the rotor inflow during a converting approach and thus limited the strength of BVI noise. The maximum deceleration was also constrained by controlling the nacelle tilt-rate during conversion. By applying these constraints, low BVI noise approaches that take into account the first-order effects of deceleration on the acoustics were systematically designed and compared to a baseline approach profile. The low-noise approaches yielded substantial noise reduction benefits on a hemisphere surrounding the aircraft and on a ground plane below the aircraft's trajectory.

  7. Field measurement of moisture-buffering model inputs for residential buildings

    DOE PAGES

    Woods, Jason; Winkler, Jon

    2016-02-05

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, then used it to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
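
    The residual mass balance behind the method reduces to one line of arithmetic. A hedged sketch (term names and units invented; the real method integrates measured time series rather than single rates):

        # Sketch of the residual mass-balance idea: every moisture flow into
        # the house air is measured except sorption, which is backed out as
        # the residual. Rates in kg/h are illustrative.
        def sorption_rate(gen, infil_in, infil_out, condensate, d_storage_air):
            """Moisture sorbed into materials = inputs - outputs - air storage change."""
            return (gen + infil_in) - (infil_out + condensate) - d_storage_air

        # e.g. 0.5 kg/h generated, 0.2 in / 0.3 out by infiltration,
        # 0.1 removed as AC condensate, air storage steady:
        print(sorption_rate(0.5, 0.2, 0.3, 0.1, 0.0), "kg/h into materials")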

  8. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To predict the biochar yield from cattle manure pyrolysis conveniently, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set with 33 experimental observations was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is highly convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has a more satisfying predictive performance, and its robustness is better than that of the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling studies of the cattle manure pyrolysis process, and even other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
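
    LS-SVM regression is attractive partly because training reduces to a single linear system. A from-scratch sketch with an RBF kernel (toy data shaped like a 33-point set; not the paper's data, hyperparameters, or tuning):

        # Sketch: least squares SVM regression solved as one linear system,
        #   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
        import numpy as np

        def rbf(A, B, gamma_k):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma_k * d2)

        def lssvm_fit(X, y, gamma=10.0, gamma_k=1.0):
            n = len(y)
            K = rbf(X, X, gamma_k)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            rhs = np.concatenate([[0.0], y])
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]               # bias b, dual weights alpha

        def lssvm_predict(Xnew, Xtr, b, alpha, gamma_k=1.0):
            return rbf(Xnew, Xtr, gamma_k) @ alpha + b

        X = np.linspace(300, 700, 33).reshape(-1, 1) / 700  # scaled pyrolysis temp
        y = 0.8 - 0.5 * X.ravel() + 0.02 * np.random.default_rng(7).normal(size=33)
        b, alpha = lssvm_fit(X, y)
        print(lssvm_predict(X[:3], X, b, alpha))  # fitted biochar-yield values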

  9. Additive Genetic Risk from Five Serotonin System Polymorphisms Interacts with Interpersonal Stress to Predict Depression

    PubMed Central

    Vrshek-Schallhorn, Suzanne; Stroud, Catherine B.; Mineka, Susan; Zinbarg, Richard E.; Adam, Emma K.; Redei, Eva E.; Hammen, Constance; Craske, Michelle G.

    2016-01-01

    Behavioral genetic research supports polygenic models of depression in which many genetic variations each contribute a small amount of risk, and prevailing diathesis-stress models suggest gene-environment interactions (GxE). Multilocus profile scores of additive risk offer an approach that is consistent with polygenic models of depression risk. In a first demonstration of this approach in a GxE study predicting depression, we created an additive multilocus profile score from five serotonin system polymorphisms (one each in the genes HTR1A, HTR2A, and HTR2C, and two in TPH2). Analyses focused on two forms of interpersonal stress as environmental risk factors. Using five years of longitudinal diagnostic and life stress interviews from 387 emerging young adults in the Youth Emotion Project, survival analyses show that this multilocus profile score interacts with major interpersonal stressful life events to predict major depressive episode onsets (HR = 1.815, p = .007). Simultaneously, there was a significant protective effect of the profile score in the absence of a recent event (HR = 0.83, p = .030). The GxE effect with interpersonal chronic stress was not significant (HR = 1.15, p = .165). Finally, effect sizes for genetic factors examined while ignoring stress suggested that such an approach could lead to overlooking or misinterpreting genetic effects. Both the GxE effect and the protective simple main effect were replicated in a sample of early adolescent girls (N = 105). We discuss potential benefits of the multilocus genetic profile score approach and caveats for future research. PMID:26595467
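
    In survival terms, the GxE test above amounts to a product term between the profile score and the stressor. A rough sketch with lifelines on simulated data (the hazard structure is invented; the study used interview-based onsets, not this toy setup):

        # Sketch: Cox model with a score-by-stress interaction term.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(9)
        n = 400
        score = rng.integers(0, 6, n)            # additive risk alleles (0-5)
        stress = rng.integers(0, 2, n)           # major interpersonal event
        hr = np.exp(-0.15 * score + 0.5 * stress + 0.35 * score * stress)
        t = rng.exponential(5.0 / hr)
        df = pd.DataFrame({"t": np.minimum(t, 5.0),
                           "event": (t <= 5.0).astype(int),
                           "score": score, "stress": stress})

        CoxPHFitter().fit(df, duration_col="t", event_col="event",
                          formula="score * stress").print_summary()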

  10. Whole-farm models to quantify greenhouse gas emissions and their potential use for linking climate change mitigation and adaptation in temperate grassland ruminant-based farming systems.

    PubMed

    Del Prado, A; Crosson, P; Olesen, J E; Rotz, C A

    2013-06-01

    The farm level is the most appropriate scale for evaluating options for mitigating greenhouse gas (GHG) emissions, because the farm represents the unit at which management decisions in livestock production are made. To date, a number of whole farm modelling approaches have been developed to quantify GHG emissions and explore climate change mitigation strategies for livestock systems. This paper analyses the limitations and strengths of the different existing approaches for modelling GHG mitigation by considering basic model structures, approaches for simulating GHG emissions from various farm components and the sensitivity of GHG outputs and mitigation measures to different approaches. Potential challenges for linking existing models with the simulation of impacts and adaptation measures under climate change are explored along with a brief discussion of the effects on other ecosystem services.

  11. A partial Hamiltonian approach for current value Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Naz, R.; Mahomed, F. M.; Chaudhry, Azam

    2014-10-01

    We develop a partial Hamiltonian framework to obtain reductions and closed-form solutions via first integrals of current value Hamiltonian systems of ordinary differential equations (ODEs). The approach is algorithmic and applies to many state and costate variables of the current value Hamiltonian. However, we apply the method to models with one control, one state and one costate variable to illustrate its effectiveness. Current value Hamiltonian systems arise in economic growth theory and other economic models. We explain our approach with the help of a simple illustrative example and then apply it to two widely used economic growth models: the Ramsey model with a constant relative risk aversion (CRRA) utility function and Cobb-Douglas technology, and a one-sector AK model of endogenous growth. We show that our newly developed systematic approach can be used to deduce results given in the literature and also to find new solutions.
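
    For reference, the canonical system the method works with can be written in standard current-value form (notation assumed here, not necessarily the authors'): for an objective \int_0^\infty e^{-\rho t} f(x,u)\,dt with dynamics \dot{x} = g(x,u), the current value Hamiltonian H = f(x,u) + \lambda\, g(x,u) yields

        \frac{\partial H}{\partial u} = 0, \qquad
        \dot{x} = \frac{\partial H}{\partial \lambda} = g(x,u), \qquad
        \dot{\lambda} = \rho\,\lambda - \frac{\partial H}{\partial x}.

    The extra \rho\lambda term in the costate equation is what distinguishes these systems from canonical Hamiltonian ones and motivates the partial Hamiltonian treatment.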

  12. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing component uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene component tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and finally the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to bring human insight into the problem, examining evolved models and picking the best-performing programs out for further analysis.
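
    The moving-average pre-processing step is the simplest component to show. A sketch (the window length is invented; the paper selects its own configuration):

        # Sketch: centered moving-average smoothing of a daily streamflow
        # series before it enters the (multigene) GP stage.
        import numpy as np

        def moving_average(x, window=3):
            """Centered moving average; edges are truncated by zero-padding."""
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="same")

        q = np.array([5.0, 6.0, 30.0, 12.0, 8.0, 7.0])  # daily streamflow
        print(moving_average(q))                         # smoothed input series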

  13. Evaluation of various modelling approaches in flood routing simulation and flood area mapping

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe

    2016-04-01

    An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional hydraulic-hydrodynamic models (1D), 2-dimensional hydraulic-hydrodynamic models (2D) and coupled 1D/2D models. The 1D hydraulic-hydrodynamic models used were HECRAS, MIKE11, LISFLOOD and XPSTORM. The 2D hydraulic-hydrodynamic models used were MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform) and XPSTORM (1D/2D). The validation of flood extent was achieved with the use of 2x2 contingency tables between the simulated and observed flooded areas for an extreme historical flash flood event; the Critical Success Index skill score was used in the validation process. The modelling approaches were also evaluated for simulation time and required computing power. The methodology was implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying sensitivity analysis with different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
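
    The validation statistic is worth spelling out: from the 2x2 contingency of simulated versus observed flood extent, the Critical Success Index is hits / (hits + misses + false alarms). A tiny sketch over boolean grids (toy arrays, not the study's maps):

        # Sketch: Critical Success Index from simulated vs observed extent.
        import numpy as np

        def critical_success_index(sim, obs):
            hits = np.sum(sim & obs)           # flooded in both
            misses = np.sum(~sim & obs)        # observed but not simulated
            false_alarms = np.sum(sim & ~obs)  # simulated but not observed
            return hits / (hits + misses + false_alarms)

        sim = np.array([[1, 1, 0], [1, 0, 0]], dtype=bool)
        obs = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)
        print(critical_success_index(sim, obs))  # 2 / (2 + 1 + 1) = 0.5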

  14. Modeling of nanostructured porous thermoelastic composites with surface effects

    NASA Astrophysics Data System (ADS)

    Nasedkin, A. V.; Nasedkina, A. A.; Kornievsky, A. S.

    2017-01-01

    The paper presents an integrated approach for the determination of effective properties of anisotropic porous thermoelastic materials with a nanoscale stochastic porosity structure. This approach includes the effective moduli method of composite mechanics, the simulation of representative volumes and the finite element method. In order to take into account the nanoscale sizes of pores, the Gurtin-Murdoch model of surface stresses and the highly conducting interface model are used at the borders between material and pores. The general methodology for determining the effective properties of porous composites is demonstrated for a two-phase composite with special conditions for stress and heat flux discontinuities at the phase interfaces. The mathematical statements of the boundary value problems and the resulting formulas to determine the complete set of effective constants of two-phase composites with arbitrary anisotropy and with surface properties are described; the generalized statements are formulated and the finite element approximations are given. It is shown that the homogenization procedures for porous composites with surface effects can be considered as special cases of the corresponding procedures for two-phase composites with interphase stresses and heat fluxes if the moduli of the nanoinclusions are negligibly small. These approaches have been implemented in the finite element package ANSYS for a model of porous material with a cubic crystal system for various values of surface moduli, porosity and number of pores. It has been noted that the magnitude of the area of the interphase boundaries influences the effective moduli of porous materials with nanosized structure.

  15. A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.

    PubMed

    Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit

    2018-06-01

    In this study, a new approach named SEYHAN is suggested for cases in which the linearity and homogeneity-of-regression-slopes assumptions of conventional ANCOVA are not met, so that conventional ANCOVA can still be used instead of robust or nonlinear ANCOVA. The proposed SEYHAN approach involves transforming a continuous covariate into a categorical structure when the relationship between the covariate and the dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set was used to illustrate SEYHAN's approach. In this approach, we performed conventional ANCOVA in each subgroup formed according to the knot values, and a two-factor analysis of variance after the MARS method was used to categorize the covariate. The first model is simpler than the second model, which includes an interaction term. Since the model with the interaction effect includes more subjects, the power of the test also increases and any existing significant difference is revealed more clearly. With the help of this approach, violations of linearity and homogeneity of regression slopes are no longer a problem for data analysis with the conventional linear ANCOVA model. It can be used quickly and efficiently in the presence of one or more covariates.
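
    Mechanically, the approach amounts to binning the covariate at the knots and fitting a factorial ANOVA. A sketch with hand-picked knots standing in for MARS output (synthetic data; column names invented):

        # Sketch: categorize the covariate at knot values, then fit a
        # two-factor ANOVA with the group x covariate-bin interaction.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(8)
        n = 300
        group = rng.integers(0, 2, n)
        cov = rng.uniform(0, 10, n)
        # Group effect differs by covariate region (slopes not homogeneous).
        y = np.where(cov < 5, 1.0, 3.0) * group + 0.5 * cov + rng.normal(0, 1, n)
        df = pd.DataFrame({"y": y, "group": group,
                           "cov_bin": pd.cut(cov, bins=[0, 5, 10],
                                             include_lowest=True)})

        model = smf.ols("y ~ C(group) * C(cov_bin)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))  # interaction term is the test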

  16. Potential impacts of climate change on birds and trees of the eastern United States: newest climate scenarios and species abundance modelling techniques

    Treesearch

    L.R. Iverson; A.M. Prasad; S.N. Matthews; M.P. Peters

    2007-01-01

    Climate change is affecting an increasing number of species the world over, and evidence is mounting that these changes will continue to accelerate. There have been many studies that use a modelling approach to predict the effects of future climatic change on ecological systems, including our own (Iverson et al. 1999, Matthews et al. 2004); this modelling approach uses a...

  17. Increasing Effectiveness in Teaching Ethics to Undergraduate Business Students.

    ERIC Educational Resources Information Center

    Lampe, Marc

    1997-01-01

    Traditional approaches to teaching business ethics (philosophical analysis, moral quandaries, executive cases) may not be effective in persuading undergraduates of the importance of ethical behavior. Better techniques include values education, ethical decision-making models, analysis of ethical conflicts, and role modeling. (SK)

  18. Linking environmental effects to health impacts: a computer modelling approach for air pollution

    PubMed Central

    Mindell, J.; Barrowcliffe, R.

    2005-01-01

    Study objective and setting: To develop a computer model, using a geographical information system (GIS), to quantify the potential health effects of air pollution from a new energy-from-waste facility on the surrounding urban population. Design: Health impacts were included where the evidence of causality is sufficiently convincing. The evidence for no threshold means that annual average increases in concentration can be used to model changes in outcome. The study combined the "contours" of additional pollutant concentrations for the new source, generated by a dispersion model, with a population database within a GIS, which was set up to calculate the product of the concentration increase, the number of people exposed within each enumeration district, the exposure-response coefficients, and the background rates of mortality and hospital admissions for several causes. Main results: The magnitude of health effects that might result from the increased PM10 exposure is small: about 0.03 deaths each year in a population of 3,500,000, with 0.04 extra hospital admissions for respiratory disease. Long term exposure might bring forward 1.8–7.8 deaths in 30 years. Conclusions: This computer model is a feasible approach to estimating impacts on human health from environmental effects, but sensitivity analyses are recommended. Relevance to clinical or professional practice: The availability of GIS and dispersion models on personal computers enables quantification of the health effects resulting from the additional air pollution that new industrial development might cause. This approach could also be used in environmental impact assessment. Care must be taken in presenting results to emphasise methodological limitations and uncertainties in the numbers. PMID:16286501
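
    The per-district computation the GIS performs is simple multiplication. A sketch with invented coefficient and rates (the study's actual exposure-response values are not reproduced here):

        # Sketch: annual attributable cases for one enumeration district.
        def extra_cases(delta_conc, pop, baseline_rate, coef_per_ugm3):
            """Extra cases = concentration increase x response coefficient
            x baseline rate x people exposed."""
            return delta_conc * coef_per_ugm3 * baseline_rate * pop

        # 0.01 ug/m3 extra PM10, 5000 residents, death rate 0.01/yr,
        # 0.075% change in mortality per ug/m3 (all values illustrative):
        print(extra_cases(0.01, 5000, 0.01, 0.00075))  # ~0.0004 deaths/yr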

  19. Developing a model for effective leadership in healthcare: a concept mapping approach

    PubMed Central

    Hargett, Charles William; Doty, Joseph P; Hauck, Jennifer N; Webb, Allison MB; Cook, Steven H; Tsipis, Nicholas E; Neumann, Julie A; Andolsek, Kathryn M; Taylor, Dean C

    2017-01-01

    Purpose Despite increasing awareness of the importance of leadership in healthcare, our understanding of the competencies of effective leadership remains limited. We used a concept mapping approach (a blend of qualitative and quantitative analysis of group processes to produce a visual composite of the group’s ideas) to identify stakeholders’ mental model of effective healthcare leadership, clarifying the underlying structure and importance of leadership competencies. Methods Literature review, focus groups, and consensus meetings were used to derive a representative set of healthcare leadership competency statements. Study participants subsequently sorted and rank-ordered these statements based on their perceived importance in contributing to effective healthcare leadership in real-world settings. Hierarchical cluster analysis of individual sortings was used to develop a coherent model of effective leadership in healthcare. Results A diverse group of 92 faculty and trainees individually rank-sorted 33 leadership competency statements. The highest rated statements were “Acting with Personal Integrity”, “Communicating Effectively”, “Acting with Professional Ethical Values”, “Pursuing Excellence”, “Building and Maintaining Relationships”, and “Thinking Critically”. Combining the results from hierarchical cluster analysis with our qualitative data led to a healthcare leadership model based on the core principle of Patient Centeredness and the core competencies of Integrity, Teamwork, Critical Thinking, Emotional Intelligence, and Selfless Service. Conclusion Using a mixed qualitative-quantitative approach, we developed a graphical representation of a shared leadership model derived in the healthcare setting. This model may enhance learning, teaching, and patient care in this important area, as well as guide future research. PMID:29355249
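
    The quantitative core of concept mapping, turning individual sortings into clusters, can be sketched as follows; the tiny example sorts and the choice of average linkage are illustrative assumptions, not the study's exact settings.

    ```python
    # Sketch of concept mapping's quantitative step: build a co-occurrence
    # matrix from participants' card sorts, convert it to distances, and apply
    # hierarchical clustering. The sorts below are tiny made-up examples.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    n_statements = 6
    # Each sort assigns every statement to a pile (arbitrary pile labels).
    sorts = [
        [0, 0, 1, 1, 2, 2],
        [0, 1, 1, 1, 2, 2],
        [0, 0, 0, 1, 1, 1],
    ]

    co = np.zeros((n_statements, n_statements))
    for s in sorts:
        s = np.asarray(s)
        co += (s[:, None] == s[None, :]).astype(float)
    co /= len(sorts)                        # fraction of sorters grouping i with j

    dist = 1.0 - co                         # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    print(fcluster(Z, t=3, criterion="maxclust"))  # cluster label per statement
    ```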

  20. A Multilevel Bifactor Approach to Construct Validation of Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony

    2018-01-01

    Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…

  1. A MIXTURE OF SEVEN ANTIANDROGENIC COMPOUNDS ELICITS ADDITIVE EFFECTS ON THE MALE RAT REPRODUCTIVE TRACT THAT CORRESPOND TO MODELED PREDICTIONS

    EPA Science Inventory

    The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...

  2. Investigating the Relationship between Dialogue Structure and Tutoring Effectiveness: A Hidden Markov Modeling Approach

    ERIC Educational Resources Information Center

    Boyer, Kristy Elizabeth; Phillips, Robert; Ingram, Amy; Ha, Eun Young; Wallis, Michael; Vouk, Mladen; Lester, James

    2011-01-01

    Identifying effective tutorial dialogue strategies is a key issue for intelligent tutoring systems research. Human-human tutoring offers a valuable model for identifying effective tutorial strategies, but extracting them is a challenge because of the richness of human dialogue. This article addresses that challenge through a machine learning…

  3. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach differs from existing research in that it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
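
    The paper's hybrid-logic machinery is not reproduced here, but the flavor of evaluating a location query against a symbolic tree model can be sketched in a few lines; all names and the query form are invented for illustration.

    ```python
    # Toy sketch of a symbolic tree location model and a simple containment
    # query, evaluated by recursive traversal (the model-checking spirit of
    # the approach, without the hybrid-logic machinery). Names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Place:
        name: str
        children: list["Place"] = field(default_factory=list)
        entities: set[str] = field(default_factory=set)

    def entities_within(place: Place) -> set[str]:
        """All entities located at `place` or anywhere beneath it."""
        found = set(place.entities)
        for child in place.children:
            found |= entities_within(child)
        return found

    campus = Place("campus", [
        Place("building-A", [Place("room-101", entities={"alice"})]),
        Place("building-B", entities={"printer-7"}),
    ])

    # Query: which entities are somewhere inside building-A?
    print(entities_within(campus.children[0]))   # {'alice'}
    ```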

  4. Developing an approach to effectively use super ensemble experiments for the projection of hydrological extremes under climate change

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kim, H.; Utsumi, N.

    2017-12-01

    This study aims to develop a new approach that projects hydrology under climate change using super ensemble experiments. The use of multiple ensembles is essential for estimating extremes, a major issue in the impact assessment of climate change, and several research programs have recently conducted such super ensemble experiments. While multiple ensembles are necessary, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use the super ensemble experiments effectively, we adopt a strategy of directly using the runoff projected by climate models. The usual approach to hydrological projection is to run hydrological models comprising land-surface and river-routing processes, driven by atmospheric boundary conditions projected by climate models; this study, in contrast, runs only a river-routing model driven by the climate models' projected runoff. Climate model output is generally systematically biased, so a preprocessing step that corrects this bias is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, none has been proposed for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly: it estimates and corrects the bias against a pseudo-observation obtained from a retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program of Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI), for which more than 400 ensemble experiments from multiple climate models are available. Validation against historical HAPPI simulations indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
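
    The abstract does not spell out the exact correction; a common choice consistent with the description is empirical quantile mapping against the pseudo-observation, sketched below on synthetic runoff series.

    ```python
    # Hedged sketch of runoff bias correction against a pseudo-observation:
    # empirical quantile mapping, one common choice (the abstract does not
    # specify the exact correction, so this is illustrative only).
    import numpy as np

    def quantile_map(model_hist, pseudo_obs, model_future):
        """Map future model values through hist-model -> pseudo-obs quantiles."""
        quantiles = np.linspace(0, 1, 101)
        mh = np.quantile(model_hist, quantiles)
        po = np.quantile(pseudo_obs, quantiles)
        # Where does each future value sit in the historical model distribution?
        q = np.interp(model_future, mh, quantiles)
        # Read off the pseudo-observation value at the same quantile.
        return np.interp(q, quantiles, po)

    rng = np.random.default_rng(1)
    model_hist = rng.gamma(2.0, 1.5, 1000)        # biased simulated runoff
    pseudo_obs = rng.gamma(2.0, 1.0, 1000)        # retrospective offline run
    model_future = rng.gamma(2.2, 1.5, 1000)      # projected runoff
    corrected = quantile_map(model_hist, pseudo_obs, model_future)
    print(corrected.mean(), model_future.mean())
    ```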

  5. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  6. A Bayesian hierarchical latent trait model for estimating rater bias and reliability in large-scale performance assessment

    PubMed Central

    2018-01-01

    We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
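
    A minimal sketch of such a rater model, assuming a PyMC-style specification with hierarchical essay-quality and rater-bias effects (the paper's exact specification and priors are not reproduced here):

    ```python
    # Minimal hierarchical rater model in PyMC (illustrative only):
    # observed score = essay quality + rater bias + noise, with hierarchical
    # priors on both effects. Data are simulated.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(2)
    n_essays, n_raters = 40, 8
    essay_idx = np.repeat(np.arange(n_essays), 2)          # two ratings each
    rater_idx = rng.integers(0, n_raters, essay_idx.size)
    true_q = rng.normal(25, 5, n_essays)
    true_bias = rng.normal(0, 1.5, n_raters)
    score = true_q[essay_idx] + true_bias[rater_idx] + rng.normal(0, 2, essay_idx.size)

    with pm.Model():
        sigma_q = pm.HalfNormal("sigma_q", 10)
        sigma_b = pm.HalfNormal("sigma_b", 5)
        quality = pm.Normal("quality", mu=25, sigma=sigma_q, shape=n_essays)
        bias = pm.Normal("bias", mu=0, sigma=sigma_b, shape=n_raters)
        noise = pm.HalfNormal("noise", 5)
        pm.Normal("score", mu=quality[essay_idx] + bias[rater_idx],
                  sigma=noise, observed=score)
        idata = pm.sample(500, tune=500, chains=2, progressbar=False)

    # Posterior mean rater biases, i.e. the estimated systematic rater effects.
    print(idata.posterior["bias"].mean(dim=("chain", "draw")))
    ```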

  7. Development of a Dynamic Energy Budget Modeling Approach to Investigate the Effects of Temperature and Resource Limitation on Mercury Bioaccumulation in Fundulus Heteroclitus

    EPA Science Inventory

    Dynamic energy budget (DEB) theory provides a generalizable and broadly applicable framework to connect sublethal toxic effects on individuals to changes in population persistence and growth. To explore this approach, we are conducting growth and bioaccumulation studies that cont...

  8. Development of a dynamic energy budget modeling approach to investigate the effects of temperature and resource limitation on mercury bioaccumulation in Fundulus heteroclitus-presentation

    EPA Science Inventory

    Dynamic energy budget (DEB) theory provides a generalizable and broadly applicable framework to connect sublethal toxic effects on individuals to changes in population survival and growth. To explore this approach, we are conducting growth and bioaccumulation studies that contrib...

  9. Development of a dynamic energy budget modeling approach to investigate the effects of temperature and resource limitation on mercury bioaccumulation in Fundulus heteroclitus.

    EPA Science Inventory

    Dynamic energy budget (DEB) theory provides a generalizable and broadly applicable framework to connect sublethal toxic effects on individuals to changes in population survival and growth. To explore this approach, we are developing growth and bioaccumulation studies that contrib...

  10. The Effects of Kolb's Experiential Learning Model on Successful Intelligence in Secondary Agriculture Students

    ERIC Educational Resources Information Center

    Baker, Marshall A.; Robinson, J. Shane

    2016-01-01

    Experiential learning is an important pedagogical approach used in secondary agricultural education. Though anecdotal evidence supports the use of experiential learning, a paucity of empirical research exists supporting the effects of this approach when compared to a more conventional teaching method, such as direct instruction. Therefore, the…

  11. Need for Orientation, Media Uses and Gratifications, and Media Effects.

    ERIC Educational Resources Information Center

    Weaver, David

    In order to study the influence of need for orientation and media gratifications on media use and media effects in political communication, two previous surveys were studied to compare the causal modeling approach and the contingent conditions approach. In the first study, 339 personal interviews were conducted with registered voters during a…

  12. The Effects of Representations, Constructivist Approaches, and Engagement on Middle School Students' Algebraic Procedure and Conceptual Understanding

    ERIC Educational Resources Information Center

    Ross, Amanda; Willson, Victor

    2012-01-01

    This study examined the effects of types of representations, constructivist teaching approaches, and student engagement on middle school algebra students' procedural knowledge and conceptual understanding. Data gathered from 16 video lessons and algebra pretest/posttests were used to run three multilevel structural equation models. Symbolic…

  13. A Positive Approach to Good Grammar

    ERIC Educational Resources Information Center

    Kuehner, Alison V.

    2016-01-01

    Correct grammar is important for precise, accurate, academic prose, but the traditional skills-based approach to teaching grammar is not effective if the goal is good writing. The sentence-combining approach shows promise. However, sentence modeling is more likely to produce strong writing and enhance reading comprehension. Through sentence…

  14. Examining the Earnings Trajectories of Community College Students Using a Piecewise Growth Curve Modeling Approach. A CAPSEE Working Paper

    ERIC Educational Resources Information Center

    Jaggars, Shanna Smith; Xu, Di

    2015-01-01

    Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…

  15. A gunner model for an AAA tracking task with interrupted observations

    NASA Technical Reports Server (NTRS)

    Yu, C. F.; Wei, K. C.; Vikmanis, M.

    1982-01-01

    The problem of modeling a trained human operator's tracking performance in an anti-aircraft system under various display blanking conditions is discussed. The input to the gunner is the observable tracking error subjected to repeated interruptions (blanking). A simple and effective gunner model was developed. The effect of blanking on the gunner's tracking performance is approached via modeling the observer and controller gains.

  16. The choice of product indicators in latent variable interaction models: post hoc analyses.

    PubMed

    Foldnes, Njål; Hagtvet, Knut Arne

    2014-09-01

    The unconstrained product indicator (PI) approach is a simple and popular approach for modeling nonlinear effects among latent variables. This approach leaves the practitioner to choose the PIs to be included in the model, introducing arbitrariness into the modeling. In contrast to previous Monte Carlo studies, we evaluated the PI approach by 3 post hoc analyses applied to a real-world case adopted from a research effort in social psychology. The measurement design applied 3 and 4 indicators for the 2 latent 1st-order variables, leaving the researcher with a choice among more than 4,000 possible PI configurations. Sixty so-called matched-pair configurations that have been recommended in previous literature are of special interest. In the 1st post hoc analysis we estimated the interaction effect for all PI configurations, keeping the real-world sample fixed. The estimated interaction effect was substantially affected by the choice of PIs, also across matched-pair configurations. Subsequently, a post hoc Monte Carlo study was conducted, with varying sample sizes and data distributions. Convergence, bias, Type I error and power of the interaction test were investigated for each matched-pair configuration and the all-pairs configuration. Variation in estimates across matched-pair configurations for a typical sample was substantial. The choice of specific configuration significantly affected convergence and the interaction test's outcome. The all-pairs configuration performed overall better than the matched-pair configurations. A further advantage of the all-pairs over the matched-pairs approach is its unambiguity. The final study evaluates the all-pairs configuration for small sample sizes and compares it to the non-PI approach of latent moderated structural equations. PsycINFO Database Record (c) 2014 APA, all rights reserved.
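
    A sketch of the bookkeeping behind PI configurations, with illustrative data and a 3-by-4 indicator design as in the study; the enumeration below is one simple way of listing matched pairings and is not claimed to reproduce the study's count of 60 configurations.

    ```python
    # Sketch of forming product indicators (PIs) for a latent interaction:
    # mean-center each indicator, then multiply pairs. Data are simulated.
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 3))   # indicators of latent variable 1
    Z = rng.normal(size=(200, 4))   # indicators of latent variable 2
    Xc, Zc = X - X.mean(0), Z - Z.mean(0)

    # All-pairs configuration: every cross-product, 3 * 4 = 12 PIs.
    all_pairs = np.column_stack([Xc[:, i] * Zc[:, j]
                                 for i in range(3) for j in range(4)])

    # Matched-pair configurations: each X indicator paired with a distinct
    # Z indicator (one simple way of enumerating such matchings).
    matchings = list(itertools.permutations(range(4), 3))
    print(len(matchings), "matched-pair configurations enumerated")
    pis_first = np.column_stack([Xc[:, i] * Zc[:, j]
                                 for i, j in enumerate(matchings[0])])
    print(all_pairs.shape, pis_first.shape)
    ```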

  17. Estimating Allee dynamics before they can be observed: polar bears as a case study.

    PubMed

    Molnár, Péter K; Lewis, Mark A; Derocher, Andrew E

    2014-01-01

    Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species.
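
    A one-dimensional toy version of the threshold idea, assuming a logistic growth term multiplied by a mate-finding probability (the paper instead couples a mating model to an age-structured matrix model; the parameters below are invented):

    ```python
    # Scalar illustration of a mate-finding Allee threshold: per-capita
    # growth goes negative at low density because mates become hard to find.
    import numpy as np
    from scipy.optimize import brentq

    r_max, theta, K = 0.08, 40.0, 2000.0

    def growth_rate(n):
        """Per-capita growth: logistic term times mate-finding probability,
        minus a small constant mortality."""
        return r_max * (1 - n / K) * (n / (n + theta)) - 0.02

    # Allee threshold: the low density at which growth crosses zero.
    threshold = brentq(growth_rate, 1e-6, K / 2)
    print(f"Allee threshold ~ {threshold:.1f} individuals")
    ```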

  18. Estimating Allee Dynamics before They Can Be Observed: Polar Bears as a Case Study

    PubMed Central

    Molnár, Péter K.; Lewis, Mark A.; Derocher, Andrew E.

    2014-01-01

    Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species. PMID:24427306

  19. Community-based training and employment: an effective program for persons with traumatic brain injury.

    PubMed

    Wall, J R; Niemczura, J G; Rosenthal, M

    1998-01-01

    Occupational entry is an important issue for persons with disabilities, as many become or remain unemployed after their injury. After traumatic brain injury (TBI), individuals exhibit high unemployment rates, especially those persons with injuries of greater severity, a limited premorbid work history and/or persons from economically disadvantaged backgrounds. Vocational rehabilitation programs have been developed to improve employability. Traditional vocational rehabilitation approaches, based on integrating work skills with cognitive rehabilitation models have proven only minimally effective with TBI. The supported employment model has been demonstrated to be much more effective with this group, as has an approach that combines vocational and psychosocial skills training along with job support. Even with these generally successful approaches, the literature on vocational rehabilitation in clients from economically disadvantaged environments who are diagnosed with TBI is limited. An approach for the economically disadvantaged, which combines work skills training in a real work community along with supported employment is presented.

  20. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte-Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared with the unstructured models, and the expected loss attached to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
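
    A hedged numerical sketch of model averaging and the EVPI calculation on simulated net-benefit draws; the two structural models, the weights, and all numbers are invented for illustration.

    ```python
    # Sketch of model averaging and EVPI over net-benefit simulations. Each
    # column of `nb` holds Monte Carlo draws of net benefit for one treatment
    # under one structural model; numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sim, n_treat = 10_000, 2
    # Two structural models with different assumed treatment effects.
    nb_m1 = rng.normal([0.0, 150.0], 400.0, size=(n_sim, n_treat))
    nb_m2 = rng.normal([0.0, -50.0], 400.0, size=(n_sim, n_treat))
    weights = np.array([0.7, 0.3])          # e.g. posterior model probabilities

    # Model-averaged net benefit: a mixture over the structural models.
    pick = rng.choice(2, size=n_sim, p=weights)
    nb = np.where(pick[:, None] == 0, nb_m1, nb_m2)

    # EVPI = E[max over treatments] - max over treatments of E[NB].
    evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPI per patient: {evpi:.1f}")
    ```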

  1. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers

    PubMed Central

    2011-01-01

    Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519
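
    The additive and dominance coding that underlies such models can be sketched directly; the 0/1/2 genotype coding below is one common convention, and the data are simulated.

    ```python
    # Sketch of additive and dominance design matrices from SNP genotypes
    # (0/1/2 minor-allele counts); epistasis would add pairwise column
    # products. Coding conventions vary; this is one common choice.
    import numpy as np

    rng = np.random.default_rng(5)
    G = rng.integers(0, 3, size=(6, 4))    # individuals x markers

    A = G - 1.0                            # additive coding: -1, 0, 1
    D = (G == 1).astype(float)             # dominance: 1 for heterozygotes

    # A pairwise (additive x additive) epistasis column for markers 0 and 1:
    epi_01 = A[:, 0] * A[:, 1]
    print(A.shape, D.shape, epi_01.shape)
    ```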

  2. Experimental and AI-based numerical modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Sadikoglu, Fahreddin; Singh, Vijay P.

    2017-10-01

    This study developed a new hybrid artificial intelligence (AI)-meshless approach for modeling contaminant transport in porous media. The key innovation of the proposed approach is that both black box and physically-based models are combined for modeling contaminant transport. The effectiveness of the approach was evaluated using experimental and real world data. Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models were calibrated to predict temporal contaminant concentrations (CCs), and the effect of noisy and de-noised data on model performance was evaluated. Then, considering the predicted CCs at test points (TPs, in the experimental study) and piezometers (in the Myandoab plain) as interior conditions, the multiquadric radial basis function (MQ-RBF), a meshless approach that solves the partial differential equation (PDE) of contaminant transport in porous media, was employed to estimate the CC values at any point within the study area where there was no TP or piezometer. Optimal values of the dispersion coefficient in the advection-dispersion PDE and the shape coefficient of the MQ-RBF were determined using the imperialist competitive algorithm. In temporal contaminant transport modeling, de-noised data enhanced the performance of the ANN and ANFIS methods in terms of the determination coefficient, by up to 6 and 5%, respectively, in the experimental study and by up to 39 and 18%, respectively, in the field study. Results showed that the ANFIS-meshless model outperformed the ANN-meshless model by up to 2 and 13% in the experimental and field studies, respectively.
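
    The meshless MQ-RBF step can be sketched as a small interpolation problem; the points, concentrations, and hand-fixed shape parameter below are illustrative, whereas the study optimizes the shape coefficient with the imperialist competitive algorithm.

    ```python
    # Sketch of multiquadric RBF (MQ-RBF) interpolation of concentrations
    # observed at scattered points, the meshless building block used here.
    import numpy as np

    def mq(r, c=1.0):
        return np.sqrt(r**2 + c**2)        # multiquadric kernel

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    conc = np.array([1.0, 0.6, 0.8, 0.3])  # concentrations at TPs/piezometers

    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    coef = np.linalg.solve(mq(r), conc)    # interpolation weights

    query = np.array([[0.5, 0.5]])         # a point with no TP or piezometer
    rq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    print(mq(rq) @ coef)                   # estimated concentration there
    ```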

  3. Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)

    USDA-ARS?s Scientific Manuscript database

    Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...

  4. An Expanded Multi-scale Monte Carlo Simulation Method for Personalized Radiobiological Effect Estimation in Radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping

    2017-03-01

    A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from organ to cell levels. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, by capturing biology-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.

  5. Comparison of Statistical Approaches for Dealing With Immortal Time Bias in Drug Effectiveness Studies

    PubMed Central

    Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen

    2016-01-01

    In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = −0.002, mean squared error = 0.025; PTDM: bias = −1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995–2008). PMID:27455963
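
    A minimal sketch of the time-dependent Cox setup using the lifelines library, with a toy long-format data set (invented values); each subject's follow-up is split at first treatment exposure so that no person-time is misclassified as treated.

    ```python
    # Sketch of the time-dependent Cox approach that avoids immortal time
    # bias: an untreated interval before first exposure, a treated interval
    # afterwards. Data are a toy long-format example.
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    df = pd.DataFrame([
        # id, start, stop, treated, event
        (1, 0.0, 2.0, 0, 0), (1, 2.0, 5.0, 1, 1),   # treated at t=2, event at 5
        (2, 0.0, 6.0, 0, 0),                        # never treated, censored
        (3, 0.0, 1.0, 0, 0), (3, 1.0, 4.0, 1, 0),   # treated at t=1, censored
        (4, 0.0, 3.0, 0, 1),                        # untreated, event at 3
        (5, 0.0, 2.5, 0, 0), (5, 2.5, 7.0, 1, 1),   # treated at t=2.5, event
        (6, 0.0, 4.0, 0, 1),                        # untreated, event at 4
    ], columns=["id", "start", "stop", "treated", "event"])

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event",
            start_col="start", stop_col="stop")
    ctv.print_summary()
    ```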

  6. Meta‐analysis using individual participant data: one‐stage and two‐stage approaches, and why they may differ

    PubMed Central

    Ensor, Joie; Riley, Richard D.

    2016-01-01

    Meta‐analysis using individual participant data (IPD) obtains and synthesises the raw, participant‐level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta‐analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual‐level interactions, such as treatment‐effect modifiers. There are two statistical approaches for conducting an IPD meta‐analysis: one‐stage and two‐stage. The one‐stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two‐stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta‐analysis model. There have been numerous comparisons of the one‐stage and two‐stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one‐stage and two‐stage IPD meta‐analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one‐stage or two‐stage itself. We illustrate the concepts with recently published IPD meta‐analyses, summarise key statistical software and provide recommendations for future IPD meta‐analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27747915
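
    A sketch of the two-stage pipeline on simulated IPD, using a difference in means per study and DerSimonian-Laird pooling; the effect sizes and the choice of pooling estimator are illustrative assumptions.

    ```python
    # Sketch of the two-stage IPD approach: estimate a treatment effect in
    # each study, then pool with a DerSimonian-Laird random-effects model.
    # A one-stage analysis would instead fit one hierarchical model to all IPD.
    import numpy as np

    rng = np.random.default_rng(6)
    effects, variances = [], []
    for true_beta in [0.4, 0.6, 0.5, 0.3]:            # per-study true effects
        n = 200
        treat = rng.integers(0, 2, n)
        y = true_beta * treat + rng.normal(0, 1, n)
        # Stage 1: difference in means and its variance within the study.
        d = y[treat == 1].mean() - y[treat == 0].mean()
        v = y[treat == 1].var(ddof=1) / (treat == 1).sum() \
            + y[treat == 0].var(ddof=1) / (treat == 0).sum()
        effects.append(d); variances.append(v)

    e, v = np.array(effects), np.array(variances)
    # Stage 2: DerSimonian-Laird between-study variance, then pooled effect.
    w = 1 / v
    q = np.sum(w * (e - np.sum(w * e) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(e) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_re = 1 / (v + tau2)
    print("pooled effect:", np.sum(w_re * e) / w_re.sum())
    ```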

  7. A comparison of two coaching approaches to enhance implementation of a recovery-oriented service model.

    PubMed

    Deane, Frank P; Andresen, Retta; Crowe, Trevor P; Oades, Lindsay G; Ciarrochi, Joseph; Williams, Virginia

    2014-09-01

    Moving to recovery-oriented service provision in mental health may entail retraining existing staff, as well as training new staff. This represents a substantial burden on organisations, particularly since transfer of training into practice is often poor. Follow-up supervision and/or coaching have been found to improve the implementation and sustainment of new approaches. We compared the effect of two coaching conditions, skills-based and transformational coaching, on the implementation of a recovery-oriented model following training. Training followed by coaching led to significant sustained improvements in the quality of care planning in accordance with the new model over the 12-month study period. No interaction effect was observed between the two conditions. However, post hoc analyses suggest that transformational coaching warrants further exploration. The results support the provision of supervision in the form of coaching in the implementation of a recovery-oriented service model, and suggest the need to better elucidate the mechanisms within different coaching approaches that might contribute to improved care.

  8. Assessing Potential Climate Change Effects on Loblolly Pine Growth: A Probabilistic Regional Modeling Approach

    Treesearch

    Peter B. Woodbury; James E. Smith; David A. Weinstein; John A. Laurence

    1998-01-01

    Most models of the potential effects of climate change on forest growth have produced deterministic predictions. However, there are large uncertainties in data on regional forest condition, estimates of future climate, and quantitative relationships between environmental conditions and forest growth rate. We constructed a new model to analyze these uncertainties...

  9. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
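
    The distribution-free quantile-fitting idea can be illustrated with scikit-learn's quantile loss; this sketch omits the paper's structured additive terms and individual-specific effects, and the data are simulated.

    ```python
    # Sketch of boosting for an upper quantile with scikit-learn's quantile
    # loss (the paper boosts structured additive predictors with mixed-model
    # terms; this shows only the distribution-free quantile fitting).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(7)
    age = rng.uniform(0, 10, (500, 1))
    bmi = 15 + 0.4 * age[:, 0] + rng.gamma(2.0, 0.8, 500)   # skewed outcome

    q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(age, bmi)
    grid = np.linspace(0, 10, 5).reshape(-1, 1)
    print(q90.predict(grid))   # estimated 90th-percentile BMI curve by age
    ```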

  10. Inferring species interactions through joint mark–recapture analysis

    USGS Publications Warehouse

    Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.

    2018-01-01

    Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the more rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. Effects on interspecific interactions on survival and capture probability were strongly supported, whereas support for the growth effect was weaker. Environmental factors were also identified to be important and in many cases stronger than interspecific interactions, and there was still substantial unexplained variation in growth and survival rates. The general approach presented here for combining mark–recapture data for two species is applicable in many other systems and could be modified to model abundance of the invader via other modeling approaches.

  11. Gray-Box Approach for Thermal Modelling of Buildings for Applications in District Heating and Cooling Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saurav, Kumar; Chandan, Vikas

    District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components, such as buildings, pipes, valves and the heating source, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach, and it results in simpler models than pure physics-based models. We first develop the individual components of the building, such as temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
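
    A minimal sketch of the gray-box idea for the temperature-evolution component, assuming a first-order RC structure fitted by least squares (the paper's actual model structure and data are not reproduced):

    ```python
    # First-order gray-box (RC) building model: physics gives the structure,
    # data give the coefficients. Discretized dynamics
    #   T[k+1] = T[k] + a*(Tout[k] - T[k]) + b*Q[k]
    # are fit by least squares; a and b absorb R, C and the time step.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 500
    Tout = 5 + 5 * np.sin(np.arange(n) / 24)      # outdoor temperature
    Q = rng.uniform(0, 1, n)                      # heating input
    T = np.empty(n); T[0] = 20.0
    for k in range(n - 1):                        # "true" system for demo data
        T[k + 1] = T[k] + 0.05 * (Tout[k] - T[k]) + 0.4 * Q[k] \
                   + rng.normal(0, 0.05)

    # Learn a and b from data (the gray-box identification step).
    X = np.column_stack([Tout[:-1] - T[:-1], Q[:-1]])
    y = np.diff(T)
    a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"a ~ {a_hat:.3f}, b ~ {b_hat:.3f}")    # recovers ~0.05 and ~0.4
    ```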

  12. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
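
    The weighted additive aggregation step can be sketched directly; the alternatives, objectives, consequence estimates and weights below are invented for illustration.

    ```python
    # Sketch of the weighted additive aggregation used to score management
    # alternatives: participants' consequence estimates per objective are
    # combined with objective weights. All numbers are invented.
    import numpy as np

    # rows: management alternatives; columns: objectives (ecological
    # condition, visitor access, cost, each scored so higher is better)
    consequences = np.array([
        [0.9, 0.2, 0.1],    # close the reef
        [0.7, 0.6, 0.5],    # restrict access at low tide
        [0.5, 0.8, 0.8],    # signage and education only
        [0.2, 1.0, 1.0],    # do nothing
    ])
    weights = np.array([0.5, 0.2, 0.3])           # elicited from participants

    scores = consequences @ weights               # weighted additive model
    print("decision scores:", np.round(scores, 2))
    print("best alternative:", scores.argmax())
    ```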

  13. Frequency Response Function Expansion for Unmeasured Translation and Rotation Dofs for Impedance Modelling Applications

    NASA Astrophysics Data System (ADS)

    Avitabile, P.; O'Callahan, J.

    2003-07-01

    Inclusion of rotational effects is critical for the accuracy of the predicted system characteristics, in almost all system modelling studies. However, experimentally derived information for the description of one or more of the components for the system will generally not have any rotational effects included in the description of the component. The lack of rotational effects has long affected the results from any system model development whether using a modal-based approach or an impedance-based approach. Several new expansion processes are described herein for the development of FRFs needed for impedance-based system models. These techniques expand experimentally derived mode shapes, residual modes from the modal parameter estimation process and FRFs directly to allow for the inclusion of the necessary rotational dof. The FRFs involving translational to rotational dofs are developed as well as the rotational to rotational dof. Examples are provided to show the use of these techniques.

  14. A Consideration on Service Business Model for Saving Energy and Reduction of CO2 Emissions Using Inverters

    NASA Astrophysics Data System (ADS)

    Kosaka, Michitaka; Yabutani, Takashi

    This paper considers the effectiveness of a service business approach for reducing CO2 emissions. “HDRIVE” is a service business that uses inverters to reduce the energy consumption of motor drives. The business model of this service is modified to find new opportunities for CO2 emission reduction by combining various factors such as financial services or long-term service contracts. Risk analysis of this business model is very important for providing stable services to users over the long term, and the HDRIVE business model is found to be suitable for this objective. The service can be applied effectively to industries with very large CO2 emissions, such as the chemical or steel industries, and has the potential to create new business when combined with CDM or CO2 emission rights trading. The effectiveness of this approach is demonstrated through several examples from real business.

  15. Analytical modeling of eddy-current losses caused by pulse-width-modulation switching in permanent-magnet brushless direct-current motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, F.; Nehl, T.W.

    1998-09-01

    Because of its high efficiency and power density, the PM brushless dc motor is a strong candidate for electric and hybrid vehicle propulsion systems. An analytical approach is developed to predict the eddy-current losses caused by the inverter's high-frequency pulse-width-modulation (PWM) switching in a permanent magnet brushless dc motor. The model uses polar coordinates to take curvature effects into account, and is also capable of including the space harmonic effect of the stator magnetic field and the effect of the stator laminations on the losses. The model was applied to an existing motor design and was verified with the finite element method. Good agreement was achieved between the two approaches. Hence, the model is expected to be very helpful in predicting PWM switching losses in permanent magnet machine design.

  16. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.
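
    A sketch of the baseline NB crash-frequency model in statsmodels on simulated work-zone data; the ME extension for work zone length is the paper's contribution and is not reproduced here.

    ```python
    # Baseline negative binomial crash-frequency model in statsmodels,
    # fitted to simulated overdispersed crash counts.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 60
    length = rng.uniform(0.5, 10.0, n)            # work zone length (miles)
    aadt = rng.uniform(10, 100, n)                # traffic volume (1000s)
    mu = np.exp(-1.0 + 0.15 * length + 0.02 * aadt)
    crashes = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

    X = sm.add_constant(np.column_stack([length, aadt]))
    nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()
    print(nb.params)   # intercept and effects of length and AADT
    ```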

  17. Predicting change in epistemological beliefs, reflective thinking and learning styles: a longitudinal study.

    PubMed

    Phan, Huy P

    2008-03-01

    Although extensive research has examined epistemological beliefs, reflective thinking and learning approaches, very few studies have looked at these three theoretical frameworks in their totality. This research tested two separate structural models of epistemological beliefs, learning approaches, reflective thinking and academic performance among tertiary students over a period of 12 months. Participants were first-year Arts (N=616; 271 females, 345 males) and second-year Mathematics (N=581; 241 females, 341 males) university students. Students' epistemological beliefs were measured with the Schommer epistemological questionnaire (EQ, Schommer, 1990). Reflective thinking was measured with the reflective thinking questionnaire (RTQ, Kember et al., 2000). Student learning approaches were measured with the revised study process questionnaire (R-SPQ-2F, Biggs, Kember, & Leung, 2001). LISREL 8 was used to test two structural equation models - the cross-lag model and the causal-mediating model. In the cross-lag model involving Arts students, structural equation modelling showed that epistemological beliefs influenced student learning approaches rather than the contrary. In the causal-mediating model involving Mathematics students, the results indicate that both epistemological beliefs and learning approaches predicted reflective thinking and academic performance. Furthermore, learning approaches mediated the effect of epistemological beliefs on reflective thinking and academic performance. Results of this study are significant as they integrated the three theoretical frameworks within the one study.

  18. A self-cognizant dynamic system approach for prognostics and health management

    NASA Astrophysics Data System (ADS)

    Bai, Guangxing; Wang, Pingfeng; Hu, Chao

    2015-03-01

    Prognostics and health management (PHM) is an emerging engineering discipline that diagnoses and predicts how and when a system will degrade in performance and lose partial or whole functionality. Due to the complexity and invisibility of the rules and states of most dynamic systems, developing an effective approach to track evolving system states is a major challenge. This paper presents a new self-cognizant dynamic system (SCDS) approach that incorporates artificial intelligence into dynamic system modeling for PHM. A feed-forward neural network (FFNN) is selected to approximate a complex system response, which is a challenging task in general due to inaccessible system physics. The trained FFNN model is then embedded into a dual extended Kalman filter algorithm to track system dynamics. A recursive computation technique for updating the FFNN model with online measurements is also derived. To validate the proposed SCDS approach, a battery dynamic system is considered as an experimental application. After modeling the battery system with a FFNN model and a state-space model, the state-of-charge (SoC) and state-of-health (SoH) are estimated by updating the FFNN model using the proposed approach. Experimental results suggest that the proposed approach improves the efficiency and accuracy of battery health management.

  19. Limited-angle effect compensation for respiratory binned cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Wenyuan; Yang, Yongyi, E-mail: yy@ece.iit.edu; Wernick, Miles N.

    Purpose: In cardiac single photon emission computed tomography (SPECT), respiratory-binned study is used to combat the motion blur associated with respiratory motion. However, owing to the variability in respiratory patterns during data acquisition, the acquired data counts can vary significantly both among respiratory bins and among projection angles within individual bins. If not properly accounted for, such variation could lead to artifacts similar to limited-angle effect in image reconstruction. In this work, the authors aim to investigate several reconstruction strategies for compensating the limited-angle effect in respiratory binned data for the purpose of reducing the image artifacts. Methods: The authors first consider a model based correction approach, in which the variation in acquisition time is directly incorporated into the imaging model, such that the data statistics are accurately described among both the projection angles and respiratory bins. Afterward, the authors consider an approximation approach, in which the acquired data are rescaled to accommodate the variation in acquisition time among different projection angles while the imaging model is kept unchanged. In addition, the authors also consider the use of a smoothing prior in reconstruction for suppressing the artifacts associated with limited-angle effect. In our evaluation study, the authors first used Monte Carlo simulated imaging with 4D NCAT phantom wherein the ground truth is known for quantitative comparison. The authors evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, uniformity and spatial resolution of the left ventricle (LV) wall, and detectability of perfusion defect using a channelized Hotelling observer. As a preliminary demonstration, the authors also tested the different approaches on five sets of clinical acquisitions. Results: The quantitative evaluation results show that the three compensation methods could all, but to different extents, reduce the reconstruction artifacts over no compensation. In particular, the model based approach reduced the mean-squared-error of the reconstructed myocardium by as much as 40%. Compared to the approach of data rescaling, the model based approach further improved both the overall and regional accuracy of the myocardium; it also further improved the lesion detectability and the uniformity of the LV wall. When ML reconstruction was used, the model based approach was notably more effective for improving the LV wall; when MAP reconstruction was used, the smoothing prior could reduce the noise level and artifacts with little or no increase in bias, but at the cost of a slight resolution loss of the LV wall. The improvements in image quality by the different compensation methods were also observed in the clinical acquisitions. Conclusions: Compensating for the uneven distribution of acquisition time among both projection angles and respiratory bins can effectively reduce the limited-angle artifacts in respiratory-binned cardiac SPECT reconstruction. Direct incorporation of the time variation into the imaging model together with a smoothing prior in reconstruction can lead to the most improvement in the accuracy of the reconstructed myocardium.

  20. The effect of increasing strength and approach velocity on triple jump performance.

    PubMed

    Allen, Sam J; Yeadon, M R Fred; King, Mark A

    2016-12-08

    The triple jump is an athletic event comprising three phases in which the optimal phase ratio (the proportion of each phase to the total distance jumped) is unknown. This study used a planar whole-body torque-driven computer simulation model of the ground contact parts of all three phases of the triple jump to investigate the effect of strength and approach velocity on optimal performance. The strength and approach velocity of the simulation model were each increased by up to 30% in 10% increments from baseline data collected from a national standard triple jumper. Increasing strength always resulted in an increased overall jump distance. Increasing approach velocity also typically resulted in an increased overall jump distance, but there was a point past which increasing approach velocity without increasing strength did not lead to an increase in overall jump distance. Increasing both strength and approach velocity by 10%, 20%, and 30% led to roughly equivalent increases in overall jump distances. Distances ranged from 14.05 m with baseline strength and approach velocity up to 18.49 m with 30% increases in both. Optimal phase ratios were either hop-dominated or balanced, and typically became more balanced when the strength of the model was increased by a greater percentage than its approach velocity. The range of triple jump distances that resulted from the optimisation process suggests that strength and approach velocity are of great importance for triple jump performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
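
    The contrast between the two dynamics can be made concrete in a few lines; the establishment delay tau and the haploid selection recursion below are illustrative stand-ins for the paper's model, not its actual estimator.

      import numpy as np

      def deterministic_traj(p0, s, T):
          # Haploid selection: p' = p(1 + s) / (1 + s p), iterated over generations.
          p = np.empty(T)
          p[0] = p0
          for t in range(1, T):
              p[t] = p[t - 1] * (1 + s) / (1 + s * p[t - 1])
          return p

      def delay_deterministic_traj(p0, s, T, tau):
          # Hold the mutant frequency for tau generations (the stochastic
          # establishment phase), then let it evolve deterministically.
          p = np.full(T, float(p0))
          if tau < T:
              p[tau:] = deterministic_traj(p0, s, T - tau)
          return p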

  2. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step, and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distributional assumption on the estimated random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to the random effects. We show in theory and numerical studies that the proposed method is more efficient than existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
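
    A toy version of the two-step logic under a log-linear model with a patient-specific intercept could look like the sketch below; the least-squares estimators and column layout are illustrative and are not the conditional quadratic inference function of the paper.

      import numpy as np

      def fit_fixed_effects(X, logy):
          # Step 1: population fixed effects from the longitudinal training sample.
          beta, *_ = np.linalg.lstsq(X, logy, rcond=None)
          return beta

      def personal_offset(X_new, logy_new, beta):
          # Step 2: a new patient's random intercept as the mean residual
          # over that patient's first few observations.
          return np.mean(logy_new - X_new @ beta)

      def recommend_dose(target_logy, beta0, beta_dose, b_i):
          # Invert log E[y] = beta0 + beta_dose * dose + b_i for the dose.
          return (target_logy - beta0 - b_i) / beta_dose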

  3. Predictions of a Large Magnetocaloric Effect in Co- and Cr-Substituted Heusler Alloys Using First-Principles and Monte Carlo Approaches

    NASA Astrophysics Data System (ADS)

    Sokolovskiy, Vladimir V.; Buchelnikov, Vasiliy D.; Zagrebin, Mikhail A.; Grünebohm, Anna; Entel, Peter

    The effect of Co- and Cr-doping on the magnetic and magnetocaloric properties of Ni-Mn-(In, Ga, Sn, and Al) Heusler alloys has been theoretically studied by combining first-principles and Monte Carlo approaches. The magnetic and magnetocaloric properties are obtained as a function of temperature and magnetic field using a mixed Potts and Blume-Emery-Griffiths model whose parameters are obtained from ab initio calculations. The Monte Carlo calculations allow predictions of a giant inverse magnetocaloric effect in partly new, hypothetical magnetic Heusler alloys across the martensitic transformation.
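
    For flavor, a bare-bones Metropolis sampler for a q-state Potts model is shown below with J = 1 and zero field; the study's mixed Potts/Blume-Emery-Griffiths Hamiltonian with ab initio exchange parameters is substantially richer.

      import numpy as np

      def potts_metropolis(L=16, q=5, T=1.0, sweeps=200, seed=None):
          rng = np.random.default_rng(seed)
          s = rng.integers(q, size=(L, L))
          for _ in range(sweeps * L * L):
              i, j = rng.integers(L, size=2)
              new = rng.integers(q)
              dE = 0.0
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nb = s[(i + di) % L, (j + dj) % L]
                  # Energy E = -J * sum of delta(s_i, s_j), with J = 1.
                  dE += int(s[i, j] == nb) - int(new == nb)
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  s[i, j] = new
          # Order-parameter proxy: fraction of sites in the majority state.
          return np.max(np.bincount(s.ravel(), minlength=q)) / (L * L)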

  4. A Conceptual Framework for Predicting the Toxicity of Reactive Chemicals: Modeling Soft Electrophilicity

    EPA Science Inventory

    Although the literature is replete with QSAR models developed for many toxic effects caused by reversible chemical interactions, the development of QSARs for the toxic effects of reactive chemicals lacks a consistent approach. While limitations exist, an appropriate starting-point…

  5. Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.

    2016-01-01

    Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…

  6. A comparison of three random effects approaches to analyze repeated bounded outcome scores with an application in a stroke revalidation study.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2008-12-30

    Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
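
    Approach (1) can be sketched directly with statsmodels, assuming a data frame with columns score, time and patient (names are illustrative): squeeze the BOS away from its bounds, transform to the real line, and fit a random-intercept linear mixed model.

      import numpy as np
      import statsmodels.formula.api as smf

      def fit_transformed_bos(df, k):
          # Modified BOS on {0, ..., k}: (y + 0.5) / (k + 1) avoids the bounds.
          z = (df["score"] + 0.5) / (k + 1.0)
          df = df.assign(logit_score=np.log(z / (1.0 - z)))
          model = smf.mixedlm("logit_score ~ time", df, groups=df["patient"])
          return model.fit()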

  7. Post-partum blues among Korean mothers: a structural equation modelling approach.

    PubMed

    Chung, Sung Suk; Yoo, Il Young; Joung, Kyoung Hwa

    2013-08-01

    The objective of this study was to propose the post-partum blues (PPB) model and to estimate the effects of self-esteem, social support, antenatal depression, and stressful events during pregnancy on PPB. Data were collected from 249 women post-partum during their stay in the maternity units of three hospitals in Korea using a self-administered questionnaire. A structural equation modelling approach using the Analysis of Moments Structure program was used to identify the direct and indirect effects of the variables on PPB. The full model had a good fit and accounted for 70.3% of the variance of PPB. Antenatal depression and stressful events during pregnancy had strong direct effects on PPB. Household income showed indirect effects on PPB via self-esteem and antenatal depression. Social support indirectly affected PPB via self-esteem, antenatal depression, and stressful events during pregnancy. © 2012 The Authors; International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.

  8. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcome computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmark plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
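
    The superposition step itself reduces to a weighted sum of precomputed beamlet grids, as in the sketch below; array shapes and names are illustrative, and the actual model applies the same superposition to RBE- and LET-related quantities as well as dose.

      import numpy as np

      def superpose(beamlets, weights):
          # beamlets: (n_beamlets, nx, ny, nz) precomputed per-beamlet dose grids
          # weights:  (n_beamlets,) derived from the beam phase-space density
          return np.tensordot(weights, beamlets, axes=1)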

  9. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
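
    As a rough Python analog (the study itself used SAS PROC LIFEREG and PHREG), the product-of-coefficients mediated effect with a Weibull AFT outcome model could be assembled with statsmodels and lifelines; the columns tx, m, time and event are hypothetical.

      import statsmodels.formula.api as smf
      from lifelines import WeibullAFTFitter

      def aft_mediated_effect(df):
          # a path: treatment -> mediator, by ordinary least squares.
          a = smf.ols("m ~ tx", df).fit().params["tx"]
          # b path: mediator -> log survival time, from the AFT scale parameter.
          aft = WeibullAFTFitter().fit(df[["time", "event", "tx", "m"]],
                                       duration_col="time", event_col="event")
          b = aft.params_.loc[("lambda_", "m")]
          return a * b   # mediated effect on the log-time scale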

  10. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
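
    Identifying the frontier needs no weights at all, only pairwise dominance checks, as in this sketch (rows of errors are candidate input sets, columns are per-target errors, lower is better):

      import numpy as np

      def pareto_frontier(errors):
          n = errors.shape[0]
          on_front = np.ones(n, dtype=bool)
          for i in range(n):
              # Some j dominates i if it is no worse on every target and
              # strictly better on at least one.
              dominated = np.any(np.all(errors <= errors[i], axis=1)
                                 & np.any(errors < errors[i], axis=1))
              on_front[i] = not dominated
          return np.where(on_front)[0]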

  11. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456

  12. Navier-Stokes Computations With One-Equation Turbulence Model for Flows Along Concave Wall Surfaces

    NASA Technical Reports Server (NTRS)

    Wang, Chi R.

    2005-01-01

    This report presents the use of a time-marching three-dimensional compressible Navier-Stokes equation numerical solver with a one-equation turbulence model to simulate the flow fields developed along concave wall surfaces without and with a downstream extension flat wall surface. The 3-D Navier-Stokes numerical solver came from the NASA Glenn-HT code. The one-equation turbulence model was derived from the Spalart and Allmaras model. The computational approach was first calibrated with computations of the velocity and Reynolds shear stress profiles of a steady flat-plate boundary layer flow. The computational approach was then used to simulate developing boundary layer flows along concave wall surfaces without and with a downstream extension wall. The author investigated the computational results for surface friction factors, near-surface velocity components, near-wall temperatures, and a turbulent shear stress component in terms of turbulence modeling, computational mesh configurations, inlet turbulence level, and time iteration step. The computational results were compared with existing measurements of skin friction factors, velocity components, and shear stresses of the developing boundary layer flows. With a fine computational mesh and a one-equation model, the computational approach could accurately predict the skin friction factors, near-surface velocity and temperature, and shear stress within the flows. The computed velocity components and shear stresses also showed the effect of vortices on the velocity variations over a concave wall. The computed eddy viscosities at near-wall locations were also compared with results from a two-equation turbulence modeling technique. The inlet turbulence length scale was found to have little effect on the eddy viscosities at locations near the concave wall surface. The eddy viscosities from the one-equation and two-equation modeling were comparable at most stream-wise stations. The present one-equation turbulence model is an effective approach for turbulence modeling in the near-wall region of flow over a concave wall.
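
    For reference, the algebraic closure at the heart of the Spalart-Allmaras model fits in a few lines; this is only the standard fv1 relation between the transported variable and the eddy viscosity, not the Glenn-HT implementation.

      def sa_eddy_viscosity(nu_tilde, nu, c_v1=7.1):
          # Spalart-Allmaras: nu_t = nu_tilde * fv1, with fv1 damping the
          # eddy viscosity where nu_tilde is small compared to nu.
          chi = nu_tilde / nu
          f_v1 = chi**3 / (chi**3 + c_v1**3)
          return nu_tilde * f_v1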

  13. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account the irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  14. A systematic review of cost-effectiveness modeling of pharmaceutical therapies in neuropathic pain: variation in practice, key challenges, and recommendations for the future.

    PubMed

    Critchlow, Simone; Hirst, Matthew; Akehurst, Ron; Phillips, Ceri; Philips, Zoe; Sullivan, Will; Dunlop, Will C N

    2017-02-01

    Complexities in the neuropathic-pain care pathway make the condition difficult to manage and difficult to capture in cost-effectiveness models. The aim of this study is to understand, through a systematic review of previous cost-effectiveness studies, some of the key strengths and limitations in data and modeling practices in neuropathic pain. Thus, the aim is to guide future research and practice to improve resource allocation decisions and encourage continued investment to find novel and effective treatments for patients with neuropathic pain. The search strategy was designed to identify peer-reviewed cost-effectiveness evaluations of non-surgical, pharmaceutical therapies for neuropathic pain published since January 2000, accessing five key databases. All identified publications were reviewed and screened according to pre-defined eligibility criteria. Data extraction was designed to reflect key data challenges and approaches to modeling in neuropathic pain and based on published guidelines. The search strategy identified 20 cost-effectiveness analyses meeting the inclusion criteria, of which 14 had original model structures. Cost-effectiveness modeling in neuropathic pain is established and increasing across multiple jurisdictions; however, amongst these studies, there is substantial variation in modeling approach, and there are common limitations. Capturing the effect of treatments upon health outcomes, particularly health-related quality-of-life, is challenging, and the health effects of multiple lines of ineffective treatment, common for patients with neuropathic pain, have not been consistently or robustly modeled. To improve future economic modeling in neuropathic pain, further research is suggested into the effect of multiple lines of treatment and treatment failure upon patient outcomes and subsequent treatment effectiveness; the impact of treatment-emergent adverse events upon patient outcomes; and consistent and appropriate pain measures to inform models. The authors further encourage transparent reporting of inputs used to inform cost-effectiveness models, with robust, comprehensive and clear uncertainty analysis and, where feasible, open-source modeling is encouraged.

  15. The impact of atmospheric deposition and climate on forest growth in Europe using two empirical modelling approaches

    NASA Astrophysics Data System (ADS)

    Dobbertin, M.; Solberg, S.; Laubhann, D.; Sterba, H.; Reinds, G. J.; de Vries, W.

    2009-04-01

    Most recent studies show increasing forest growth in central Europe, rather than a decline as was expected due to the negative effects of air pollution. While nitrogen deposition, increasing temperature and changes in forest management are discussed as possible causes, quantification of the various environmental factors has rarely been undertaken. In our study, we used data from several hundred intensive monitoring plots from the ICP Forests network in Europe, ranging from northern Finland to Spain and southern Italy. Five-year growth data for the period 1994-1999 were available from roughly 650 plots to examine the influence of environmental factors on forest growth. Evaluations focused on the influence of nitrogen, sulphur and acid deposition, temperature, precipitation and drought. Concerning the latter meteorological variables, we used the deviation from the long-term (30-year) mean. The study included the main tree species common beech (Fagus sylvatica), sessile or pedunculate oak (Quercus petraea and Q. robur), Scots pine (Pinus sylvestris) and Norway spruce (Picea abies). Two very different approaches were used. In the first approach an individual tree-based regression model was applied (Laubhann et al., 2009), while in the second approach a stand-based model was applied (Solberg et al., 2009). The individual tree-based model had the measured basal area increment of each individual tree as the growth response variable, and tree size (diameter at breast height), tree competition (basal area of larger trees and stand density index), site factors (e.g. soil C/N ratio, temperature) and environmental factors (e.g. temperature change compared to the long-term average, nitrogen and sulphur deposition) as influencing parameters. In the stand-growth model, stem volume increment was used as the growth response variable, after filtering out the expected growth. Expected growth was modelled as a function of site productivity, stand age and a stand density index. Relative volume growth was then calculated as actual growth in % of expected growth. The site productivity was either taken from expert estimates or computed for each species from three site index curves for northern, central and southern Europe. Requirements for plot selection were different for the two methods, resulting in 382 plots selected for the individual-tree approach and 363 plots for the stand growth model approach. Using a mixed model approach, the individual tree-based models for all species showed a high goodness of fit, with pseudo-R2 between 0.33 and 0.44. Diameter at breast height and basal area of larger trees were highly influential variables in all models. Increasing temperature showed a positive effect on growth for all species except Norway spruce. Nitrogen deposition showed a positive impact on growth for all four species. This influence was significant with p < 0.05 for all species except common beech, where the effect was nearly significant (p = 0.077). An increase of 1 kg N ha-1 yr-1 corresponded to an increase in basal area increment of between 1.20% and 1.49%, depending on species. The stand-growth models explained between 18% and 40% of the variance in expected growth, mainly with a positive effect of site productivity and a negative effect of age. The various models and statistical approaches were fairly consistent, and indicated a fertilizing effect of nitrogen deposition on relative growth, with a slightly above 1 percent increase in volume increment per kg of nitrogen deposition per ha and year.
This was most clear for spruce and pine, and most pronounced for plots having soil C/N ratios above 25 (i.e. low nitrogen availability). Also, we found a positive relationship between relative growth and summer temperature, i.e. the May-August mean temperature deviation from the 1961-1990 means. Other influences were uncertain. Possibly, sulphur and acid deposition have effects on growth, but these effects are eventually outweighed by the positive effect of nitrogen deposition, because of co-linearity between these variables. Considering an average total stem carbon uptake for European forests near 1730 kg per hectare and year, the increase in growth in the individual tree-based models implied an estimated sequestration of approximately 21-26 kg carbon per kg nitrogen deposition. Using the growth data and the relative stem growth predicted in the stand growth models, values for the various models ranged between 16 and 24 kg (mean 19 kg) carbon uptake per kg nitrogen deposition. Both approaches, although being very different and using different sets of plots and different methods to estimate the N-induced carbon uptake in stem wood, gave very similar results. In summary, our results indicate a clear fertilization effect of N deposition on European forests, mainly on sites with high soil C/N ratios. It is in line with approaches focused on the fate of N in forest ecosystems and with results of N fertilizer experiments, but much smaller than had recently been reported in other field studies (De Vries et al., 2008). Increasing temperature was also found to have a positive influence on forest growth, but this effect seemed to be less clear. References: De Vries, W., Solberg, S., Dobbertin, M., Sterba, H., Laubhann, D., Reinds, G.J., Nabuurs, G.-J., Gundersen, P. (2008) Ecologically implausible carbon response. Nature, 451, E1-E3. Laubhann, D., Sterba, H., Reinds, G.J., de Vries, W. The impact of atmospheric deposition and climate on forest growth in European monitoring plots: An individual tree growth model. For. Ecol. Manage. (2009) doi:10.1016/j.foreco.2008.09.050. Solberg, S., Dobbertin, M., Reinds, G.J., Lange, H., Andreassen, K., Garcia Fernandez, P., Hildingsson, A., de Vries, W. Analyses of the impact of changes in atmospheric deposition and climate on forest growth in European monitoring plots: A stand growth approach. For. Ecol. Manage. (2009) doi:10.1016/j.foreco.2008.09.057.
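
    A minimal illustration of the individual tree-based approach, assuming hypothetical column names, is a mixed model for log basal-area increment with a plot-level random intercept:

      import numpy as np
      import statsmodels.formula.api as smf

      def fit_tree_growth(df):
          # bai: basal-area increment; dbh: diameter at breast height;
          # bal: basal area of larger trees; n_dep: N deposition;
          # temp_dev: temperature deviation from the long-term mean.
          model = smf.mixedlm("np.log(bai) ~ np.log(dbh) + bal + n_dep + temp_dev",
                              df, groups=df["plot_id"])
          return model.fit()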

  16. A cost-effectiveness comparison of existing and Landsat-aided snow water content estimation systems

    NASA Technical Reports Server (NTRS)

    Sharp, J. M.; Thomas, R. W.

    1975-01-01

    This study describes how Landsat imagery can be cost-effectively employed to augment an operational hydrologic model. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model presently used by the California Department of Water Resources. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the Landsat-aided approach.

  17. An Enhanced Engineering Perspective of Global Climate Systems and Statistical Formulation of Terrestrial CO2 Exchanges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Yuanshun; Baek, Seung H.; Garcia-Diza, Alberto

    2012-01-01

    This paper designs a comprehensive approach based on the engineering machine/system concept to model, analyze, and assess the level of CO2 exchange between the atmosphere and terrestrial ecosystems, which is an important factor in understanding changes in global climate. The focus of this article is on spatial patterns and on the correlation between levels of CO2 fluxes and a variety of influencing factors in eco-environments. The engineering/machine concept used is a system protocol that includes the sequential activities of design, test, observe, and model. This concept is applied to explicitly include various influencing factors and interactions associated with CO2 fluxes. To formulate effective models of a large and complex climate system, this article introduces a modeling technique that will be referred to as Stochastic Filtering Analysis of Variance (SF-ANOVA). The CO2 flux data observed from some sites of AmeriFlux are used to illustrate and validate the analysis, prediction and globalization capabilities of the proposed engineering approach and the SF-ANOVA technique. The SF-ANOVA modeling approach was compared to stepwise regression, ridge regression, and neural networks. The comparison indicated that the proposed approach is a valid and effective tool with similar accuracy and less complexity than the other procedures.

  18. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    NASA Astrophysics Data System (ADS)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either of stopping the block or of reducing its kinetic energy. The influence of the parameter ranges on the meta-model accuracy has also been investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a powerful tool to support the design of these structures. Furthermore, because it accommodates the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve rockfall quantitative hazard assessment and optimise rockfall mitigation strategies.

  19. a System Dynamics Model to Study the Importance of Infrastructure Facilities on Quality of Primary Education System in Developing Countries

    NASA Astrophysics Data System (ADS)

    Pedamallu, Chandra Sekhar; Ozdamar, Linet; Weber, Gerhard-Wilhelm; Kropat, Erik

    2010-06-01

    The system dynamics approach is a holistic way of solving problems in real-time scenarios. It is a powerful methodology and computer simulation modeling technique for framing, analyzing, and discussing complex issues and problems. System dynamics modeling and simulation is often the background of a systemic thinking approach and has become a management and organizational development paradigm. This paper proposes a system dynamics approach for studying the importance of infrastructure facilities for the quality of the primary education system in developing nations. The model is built using the Cross Impact Analysis (CIA) method of relating entities and attributes relevant to the primary education system in any given community. We offer a survey to build the cross-impact correlation matrix and, hence, to better understand the primary education system and the importance of infrastructural facilities for the quality of primary education. The resulting model enables us to predict the effects of infrastructural facilities on the community's access to primary education. This may support policy makers in taking more effective actions in campaigns.

  20. Hybrid modeling in biochemical systems theory by means of functional petri nets.

    PubMed

    Wu, Jialiang; Voit, Eberhard

    2009-02-01

    Many biological systems are genuinely hybrids consisting of interacting discrete and continuous components and processes that often operate at different time scales. It is therefore desirable to create modeling frameworks capable of combining differently structured processes and permitting their analysis over multiple time horizons. During the past 40 years, Biochemical Systems Theory (BST) has been a very successful approach to elucidating metabolic, gene regulatory, and signaling systems. However, its foundation in ordinary differential equations has precluded BST from directly addressing problems containing switches, delays, and stochastic effects. In this study, we extend BST to hybrid modeling within the framework of Hybrid Functional Petri Nets (HFPN). First, we show how the canonical GMA and S-system models in BST can be directly implemented in a standard Petri Net framework. In a second step we demonstrate how to account for different types of time delays as well as for discrete, stochastic, and switching effects. Using representative test cases, we validate the hybrid modeling approach through comparative analyses and simulations with other approaches and highlight the feasibility, quality, and efficiency of the hybrid method.
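
    The continuous BST core that the hybrid framework embeds can be stated compactly; the sketch below integrates a generic S-system with forward Euler and makes no attempt at the Petri-net layer of discrete, delayed, or stochastic events.

      import numpy as np

      def s_system_rhs(X, a, g, b, h):
          # dX_i/dt = a_i * prod_j X_j^g_ij - b_i * prod_j X_j^h_ij
          return a * np.prod(X ** g, axis=1) - b * np.prod(X ** h, axis=1)

      def euler_integrate(X0, a, g, b, h, dt=0.01, steps=1000):
          X = np.array(X0, dtype=float)
          for _ in range(steps):
              X = X + dt * s_system_rhs(X, a, g, b, h)
          return X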

  1. Service-based analysis of biological pathways

    PubMed Central

    Zheng, George; Bouguettaya, Athman

    2009-01-01

    Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, also described in this paper. PMID:19796403

  2. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests was conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
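
    To make the screening measure concrete, the baseline Empirical Bayes PSI (against which the random-effect models are compared) can be sketched as below, where mu and k would come from a fitted negative binomial safety performance function:

      import numpy as np

      def psi_empirical_bayes(observed, mu, k):
          # Shrink observed counts toward the SPF prediction mu; k is the
          # inverse dispersion parameter of the negative binomial SPF.
          w = 1.0 / (1.0 + mu / k)
          eb = w * mu + (1.0 - w) * observed
          return np.maximum(eb - mu, 0.0)   # potential for safety improvement

    Sites are then ranked by PSI to screen the network for hotspots.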

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Jason; Winkler, Jon

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term—the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
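
    A single-zone EMPD-style update, with the buffer lumped into one moisture node, might look like the sketch below; all symbols (air mass m_air, buffer capacitance C_buf, film coefficient h_m) are illustrative simplifications, not the inputs extracted by the authors.

      def empd_step(w_zone, w_buf, dt, m_air, C_buf, h_m, A, gen, m_inf, w_out):
          # Zone-to-buffer moisture flow, driven by the humidity-ratio difference.
          q_buf = h_m * A * (w_zone - w_buf)
          # Zone balance: internal generation, infiltration, buffer exchange.
          dw_zone = (gen + m_inf * (w_out - w_zone) - q_buf) / m_air
          # Buffer node: capacitance C_buf embodies the effective penetration
          # depth and the slope of the material's sorption curve.
          dw_buf = q_buf / C_buf
          return w_zone + dt * dw_zone, w_buf + dt * dw_buf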

  4. Pharmaceutical interventions for mitigating an influenza pandemic: modeling the risks and health-economic impacts.

    PubMed

    Postma, Maarten J; Milne, George; Nelson, E Anthony S; Pyenson, Bruce; Basili, Marcello; Coker, Richard; Oxford, John; Garrison, Louis P

    2010-12-01

    Model-based analyses built on burden-of-disease and cost-effectiveness theory predict that pharmaceutical interventions may efficiently mitigate both the epidemiologic and economic impact of an influenza pandemic. Pharmaceutical interventions typically encompass the application of (pre)pandemic influenza vaccines, other vaccines (notably pneumococcal), antiviral treatments and other drug treatment (e.g., antibiotics to target potential complications of influenza). However, these models may be too limited to capture the full macro-economic impact of pandemic influenza. The aim of this article is to summarize current health-economic modeling approaches to recognize the strengths and weaknesses of these approaches, and to compare these with more recently proposed alternative methods. We conclude that it is useful, particularly for policy and planning purposes, to extend modeling concepts through the application of alternative approaches, including insurers' risk theories, human capital approaches and sectoral and full macro-economic modeling. This article builds on a roundtable meeting of the Pandemic Influenza Economic Impact Group that was held in Boston, MA, USA, in December 2008.

  5. Comparing simple and complex approaches to simulate the impacts of soil water repellency on runoff and erosion in burnt Mediterranean forest slopes

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Catarina Simões Vieira, Diana; Keizer, Jan Jacob

    2017-04-01

    Fires impact soil hydrological properties, enhancing soil water repellency and therefore increasing the potential for surface runoff generation and soil erosion. In consequence, the successful application of hydrological models to post-fire conditions requires the appropriate simulation of the effects of soil water repellency on soil hydrology. This work compared three approaches to model soil water repellency impacts on soil hydrology in burnt eucalypt and pine forest slopes in central Portugal: 1) Daily approach, simulating repellency as a function of soil moisture, and influencing the maximum soil available water holding capacity. It is based on the Thornthwaite-Mather soil water modelling approach, and is parameterized with the soil's wilting point and field capacity, and a parameter relating soil water repellency with water holding capacity. It was tested with soil moisture data from burnt and unburnt hillslopes. This approach was able to simulate post-fire soil moisture patterns, which the model without repellency was unable to do. However, model parameters were different between the burnt and unburnt slopes, indicating that more research is needed to derive standardized parameters from commonly measured soil and vegetation properties. 2) Seasonal approach, pre-determining repellency at the seasonal scale (3 months) in four classes (from none to extreme). It is based on the Morgan-Morgan-Finney (MMF) runoff and erosion model, applied at the seasonal scale, and is parameterized with a parameter relating repellency class with field capacity. It was tested with runoff and erosion data from several experimental plots, and led to important improvements in runoff prediction over an approach with constant field capacity for all seasons (calibrated for repellency effects), but only slight improvements in erosion predictions. In contrast with the daily approach, the parameters could be reproduced between different sites. 3) Constant approach, specifying values for soil water repellency for the three years after the fire, and keeping them constant throughout the year. It is based on a daily Curve Number (CN) approach, and was incorporated directly in the Soil and Water Assessment Tool (SWAT) model and tested with erosion data from a burnt hillslope. This approach was able to successfully reproduce soil erosion. The results indicate that simplified approaches can be used to adapt existing models for post-fire simulation, taking repellency into account. Taking into account the seasonality of repellency seems more important for simulating surface runoff than erosion, possibly because simulating the larger runoff rates correctly is sufficient for erosion simulation. The constant approach can be applied directly in the parameterization of existing runoff and erosion models for soil loss and sediment yield prediction, while the seasonal approach can readily be developed as a next step, with further work needed to assess whether the approach and associated parameters can be applied in multiple post-fire environments.
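
    A bucket-model fragment in the spirit of the daily approach is sketched below; the repellency factor alpha and the way dryness shrinks the holding capacity are illustrative assumptions, not the calibrated relationship used in the study.

      def bucket_step(theta, rain, pet, wp, fc, alpha):
          # Repellency grows as soil moisture falls toward the wilting point,
          # reducing the effective water holding capacity.
          dryness = max(0.0, (fc - theta) / (fc - wp))
          capacity = fc - alpha * dryness * (fc - wp)
          theta += rain
          runoff = max(0.0, theta - capacity)   # excess becomes surface runoff
          theta -= runoff
          et = min(pet, max(0.0, theta - wp))   # actual ET limited by storage
          theta -= et
          return theta, runoff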

  6. The influence of approach-avoidance motivational orientation on conflict adaptation.

    PubMed

    Hengstler, Maikel; Holland, Rob W; van Steenbergen, Henk; van Knippenberg, Ad

    2014-06-01

    To deal effectively with a continuously changing environment, our cognitive system adaptively regulates resource allocation. Earlier findings showed that an avoidance orientation (induced by arm extension), relative to an approach orientation (induced by arm flexion), enhanced sustained cognitive control. In avoidance conditions, performance on a cognitive control task was enhanced, as indicated by a reduced congruency effect, relative to approach conditions. Extending these findings, in the present behavioral studies we investigated dynamic adaptations in cognitive control, that is, conflict adaptation. We proposed that an avoidance state recruits more resources in response to conflicting signals, and thereby increases conflict adaptation. Conversely, in an approach state, conflict processing diminishes, which consequently weakens conflict adaptation. As predicted, approach versus avoidance arm movements affected both behavioral congruency effects and conflict adaptation: As compared to approach, avoidance movements elicited reduced congruency effects and increased conflict adaptation. These results are discussed in line with a possible underlying neuropsychological model.

  7. Efficient simulations of large-scale structure in modified gravity cosmologies with comoving Lagrangian acceleration

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2017-05-01

    We implement an adaptation of the COLA approach, a hybrid scheme that combines Lagrangian perturbation theory with an N-body approach, to model nonlinear collapse in chameleon and symmetron modified gravity models. Gravitational screening is modeled effectively through the attachment of a suppression factor to the linearized Klein-Gordon equations. The adapted COLA approach is benchmarked against an N-body code both for the Λ cold dark matter (ΛCDM) scenario and for the modified gravity theories. It is found to perform well in the estimation of the dark matter power spectra, with consistency of 1% up to k ≈ 2.5 h/Mpc. Redshift space distortions are shown to be effectively modeled through a Lorentzian parametrization with a velocity dispersion fit to the data. We find that COLA performs less well in predicting the halo mass functions, but is consistent, within the 1σ uncertainties of our simulations, in the relative changes to the mass function induced by the modified gravity models relative to ΛCDM. The results demonstrate that COLA, proposed to enable accurate and efficient nonlinear predictions for ΛCDM, can be effectively applied to a wider set of cosmological scenarios, with intriguing properties, for which clustering behavior needs to be understood for upcoming surveys such as LSST, DESI, Euclid, and WFIRST.
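
    The Lorentzian redshift-space parametrization referred to above has a compact form: a Kaiser boost multiplied by a velocity-dispersion damping term. The sketch assumes the real-space power spectrum is already in hand.

      import numpy as np

      def p_redshift_space(k, mu, p_real, f, sigma_v):
          # Kaiser large-scale boost times Lorentzian fingers-of-god damping;
          # f is the growth rate and sigma_v the fitted velocity dispersion.
          kaiser = (1.0 + f * mu**2) ** 2
          damping = 1.0 / (1.0 + (k * mu * sigma_v) ** 2)
          return kaiser * damping * p_real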

  8. Summary of a Competency Based, Field Centered, Systems Approach to Elementary Teacher Education. Summary of the Final Report.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    A competency-based, field-centered systems approach to elementary school teacher education was designed to bring about specified, measurable outcomes, to have evidence of its effectiveness continually available, and to be adaptive in the light of that evidence. The model was separated into two interdependent parts, the instructional model and the…

  9. Numerical approach to model independently reconstruct f (R ) functions through cosmographic data

    NASA Astrophysics Data System (ADS)

    Pizza, Liberato

    2015-06-01

    The challenging issue of determining the correct f(R) among several possibilities is revisited here by means of numerical reconstructions of the modified Friedmann equations over the redshift interval z ∈ [0, 1]. Frequently, a severe degeneracy between f(R) approaches occurs, since different paradigms correctly explain present-time dynamics. To set the initial conditions on the f(R) functions, we employ the so-called cosmography of the Universe, i.e., the technique of fixing constraints on the observable Universe by comparing expanded observables with current data. This powerful approach is essentially model independent, and correspondingly we obtain a model-independent reconstruction of f(R(z)) classes within the interval z ∈ [0, 1]. To allow the Hubble rate to evolve around z ≤ 1, we considered three relevant frameworks of effective cosmological dynamics, i.e., the ΛCDM model, the Chevallier-Polarski-Linder parametrization, and a polynomial approach to dark energy. Finally, cumbersome algebra permits passing from f(z) to f(R), and the general outcome of our work is the determination of a viable f(R) function, which effectively describes the observed Universe dynamics.

  10. Validation of Finite-Element Models of Persistent-Current Effects in Nb3Sn Accelerator Magnets

    DOE PAGES

    Wang, X.; Ambrosio, G.; Chlachidze, G.; ...

    2015-01-06

    Persistent magnetization currents are induced in superconducting filaments during the current ramping in magnets. The resulting perturbation to the design magnetic field leads to field quality degradation, in particular at low field, where the effect is stronger relative to the main field. The effects observed in NbTi accelerator magnets were reproduced well with the critical-state model. However, this approach becomes less accurate for the calculation of the persistent-current effects observed in Nb3Sn accelerator magnets. Here a finite-element method based on the measured strand magnetization is validated against three state-of-the-art Nb3Sn accelerator magnets featuring different subelement diameters, critical currents, magnet designs and measurement temperatures. The temperature dependence of the persistent-current effects is reproduced. Based on the validated model, the impact of conductor design on the persistent-current effects is discussed. The performance, limitations and possible improvements of the approach are also discussed.

  11. The problem of deriving the field-induced thermal emission in Poole-Frenkel theories

    NASA Astrophysics Data System (ADS)

    Ongaro, R.; Pillonnet, A.

    1992-10-01

    A discussion is made of the legitimacy of implementing the usual model of field-assisted release of electrons over the lowered potential barrier of donors. It is stressed that no reliable interpretation is available for the usual modelling of wells on which Poole-Frenkel (PF) derivations are established. This is so because there do not seem to exist reliable ways of implanting a Coulomb potential well in the gap of a material. In an attempt to bridge the gap between the classical potential-energy approaches and the total-energy approach of Mahapatra and Roy, a Bohr-type model of wells is proposed. In addition, a brief review of quantum treatments of electronic transport in materials is presented, in order to see whether more reliable ways of approaching the PF effect can be derived on indisputable bases. Finally, it is concluded that, at present, the PF effect can be safely established neither theoretically nor experimentally.

  12. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
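
    The continuous pairwise CRF step can be illustrated as Gaussian MAP smoothing of the CNN depth map: with affinities W between pixels, minimizing ||d - d_cnn||^2 + lam * d^T L d gives the linear system below. The quadratic energy is a common simplification, not necessarily the authors' exact potentials.

      import numpy as np

      def crf_refine(d_cnn, W, lam=1.0):
          # L is the graph Laplacian of the affinity matrix W; solving
          # (I + lam * L) d = d_cnn returns the smoothed depth estimate.
          L = np.diag(W.sum(axis=1)) - W
          n = len(d_cnn)
          return np.linalg.solve(np.eye(n) + lam * L, d_cnn)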

  13. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience for understanding the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches, with a clear separation into time- and frequency-domain methods. In this simulation study we present a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective analysis of directed interactions. It is based on a comparison of the prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
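
    A stripped-down version of the predictability comparison behind a Granger Causality Index is sketched below; the paper's frequency selectivity comes from cancelling specific signal components before refitting, which is abstracted away here.

      import numpy as np

      def residual_variance(x, predictors, order):
          # Least-squares AR fit of x on the past of the given predictor series.
          T = len(x)
          X = np.asarray([np.concatenate([p[t - order:t] for p in predictors])
                          for t in range(order, T)])
          beta, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
          return np.var(x[order:] - X @ beta)

      def granger_index(x, y, order=5):
          full = residual_variance(x, [x, y], order)
          restricted = residual_variance(x, [x], order)
          return np.log(restricted / full)   # > 0: the past of y helps predict x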

  14. Evaluating Individual Students' Perceptions of Instructional Quality: An Investigation of their Factor Structure, Measurement Invariance, and Relations to Educational Outcomes.

    PubMed

    Scherer, Ronny; Nilsen, Trude; Jansen, Malte

    2016-01-01

    Students' perceptions of instructional quality are among the most important criteria for evaluating teaching effectiveness. The present study evaluates different latent variable modeling approaches (confirmatory factor analysis, exploratory structural equation modeling, and bifactor modeling), which are used to describe these individual perceptions with respect to their factor structure, measurement invariance, and the relations to selected educational outcomes (achievement, self-concept, and motivation in mathematics). On the basis of the Programme for International Student Assessment (PISA) 2012 large-scale data sets of Australia, Canada, and the USA (N = 26,746 students), we find support for the distinction between three factors of individual students' perceptions and full measurement invariance across countries for all modeling approaches. In this regard, bifactor exploratory structural equation modeling outperformed alternative approaches with respect to model fit. Our findings reveal significant relations to the educational outcomes. This study synthesizes different modeling approaches of individual students' perceptions of instructional quality and provides insights into the nature of these perceptions from an individual differences perspective. Implications for the measurement and modeling of individually perceived instructional quality are discussed.

  15. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band, near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements, so it can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy; its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together these two algorithms compose the robust sound source localization approach, and the more accurate steering vectors they yield can support further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
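
    The W2D-MUSIC weighting and the near-field, perturbation-aware array model are specific to the paper, but both build on the standard MUSIC pseudospectrum. A minimal narrowband NumPy sketch follows; the grid of candidate steering vectors would come from the paper's improved array model and is simply an input here.

        import numpy as np

        def music_spectrum(X, steering, n_sources):
            # X: sensors x snapshots of complex array data.
            # steering: grid x sensors matrix of candidate steering vectors.
            # Returns the MUSIC pseudospectrum evaluated over the grid.
            R = X @ X.conj().T / X.shape[1]   # sample covariance matrix
            _, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
            noise = vecs[:, :-n_sources]      # noise-subspace eigenvectors
            proj = steering.conj() @ noise    # a^H E_n for every grid point
            return 1.0 / np.linalg.norm(proj, axis=1) ** 2

    Peaks of the pseudospectrum indicate source locations; in the paper's approach, the estimated gain, phase, and position corrections would enter through the steering vectors, and the 2-D weighting would modify the spectrum accordingly.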

  16. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

    Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
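
    As a toy illustration of the counterfactual logic the authors describe (not their model), the following hypothetical agent-based simulation contrasts an intervention scenario with its counterfactual under interference: treating some agents changes the outcomes of untreated agents through the contact process, so the population-level contrast includes indirect effects.

        import numpy as np

        def run_epidemic(n, p_transmit, treat_frac, seed):
            rng = np.random.default_rng(seed)
            treated = rng.random(n) < treat_frac       # who gets the intervention
            infected = rng.random(n) < 0.05            # initial seed cases
            for _ in range(50):                        # discrete time steps
                contacts = rng.integers(0, n, size=n)  # one random contact each
                at_risk = infected[contacts] & (rng.random(n) < p_transmit)
                at_risk &= ~(treated & (rng.random(n) < 0.8))  # 80% protection
                infected |= at_risk
            return infected.mean()

        # Counterfactual contrast: the same simulated world (same seed) under
        # 50% versus 0% intervention coverage. The difference captures direct
        # and indirect (herd) effects, i.e., interference.
        effect = (run_epidemic(10_000, 0.3, 0.5, seed=1)
                  - run_epidemic(10_000, 0.3, 0.0, seed=1))
        print(effect)

    Rerunning the same generative process under alternative exposure assignments is exactly the sense in which agent-based models simulate counterfactual outcomes in the presence of complexity.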

  17. Synthesizing Technology Adoption and Learners' Approaches towards Active Learning in Higher Education

    ERIC Educational Resources Information Center

    Chan, Kevin; Cheung, George; Wan, Kelvin; Brown, Ian; Luk, Green

    2015-01-01

    In understanding how active and blended learning approaches engage learners with learning technologies in undergraduate education, current research models tend to undermine the effect of learners' variations, particularly regarding their styles and approaches to learning, on intention and use of learning technologies. This study contributes to further…

  18. Estimating Causal Effects in Mediation Analysis Using Propensity Scores

    ERIC Educational Resources Information Center

    Coffman, Donna L.

    2011-01-01

    Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…
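
    The propensity-score adjustment proposed here is more involved than can be shown in full, but the core move, weighting observations by the inverse probability of their observed mediator value so that measured mediator-outcome confounding is removed, can be sketched on synthetic data. All variable names and effect sizes below are illustrative assumptions.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        x = rng.normal(size=n)                       # confounder of M and Y
        t = rng.integers(0, 2, size=n)               # randomized treatment
        m = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * t + x)))).astype(int)
        y = t + 1.5 * m + x + rng.normal(size=n)     # outcome

        # Propensity of the mediator given treatment and the confounder.
        feats = np.column_stack([t, x])
        ps = LogisticRegression().fit(feats, m).predict_proba(feats)[:, 1]
        w = np.where(m == 1, 1 / ps, 1 / (1 - ps))   # inverse-probability weights

        # Weighted outcome model: with the measured M-Y confounding removed by
        # the weights, the coefficients on t and m approximate the direct and
        # mediated pathways.
        fit = sm.WLS(y, sm.add_constant(np.column_stack([t, m])), weights=w).fit()
        print(fit.params)

    Without the weights, an ordinary regression of y on t and m would be biased by the confounder x; the weighting mimics the randomization of the mediator that the classical approach implicitly assumes.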

  19. A Synthesis of Equilibrium and Historical Models of Landform Development.

    ERIC Educational Resources Information Center

    Renwick, William H.

    1985-01-01

    The synthesis of two approaches that can be used in teaching geomorphology is described. The equilibrium approach explains landforms and landform change in terms of equilibrium between landforms and controlling processes. The historical approach draws on climatic geomorphology to describe the effects of Quaternary climatic and tectonic events on…

  20. Early Life Stress and Sleep Restriction as Risk Factors in PTSD: An Integrative Pre-Clinical Approach

    DTIC Science & Technology

    2014-04-01

    potential risk factors, with high relevance to soldiers. The primary aims of the project are thus: 1) To establish an effective animal model of PTSD that would take into consideration the contribution of risk factors to the induction of the… develop the model as a platform for pharmacological testing of novel targets for drug development; 5) As an additional aim, once an effective animal model…
