Sample records for model sensitivity analyses

  1. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
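
    The regression step described above can be sketched generically: standardize the sampled inputs from the PSA, regress the simulated outcome on them, and read the intercept as the estimated base-case outcome and the standardized coefficients as measures of relative parameter importance. A minimal illustration on synthetic data (parameter values and coefficients are made up, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a probabilistic sensitivity analysis (PSA):
    # 10,000 draws of three illustrative input parameters.
    n = 10_000
    params = rng.normal(size=(n, 3))
    outcome = 5.0 + 1.2 * params[:, 0] - 0.4 * params[:, 1] + rng.normal(scale=0.5, size=n)

    # Standardize inputs so the coefficients are comparable across parameters.
    z = (params - params.mean(axis=0)) / params.std(axis=0)

    # Ordinary least squares: outcome ~ intercept + standardized parameters.
    X = np.column_stack([np.ones(n), z])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

    print("estimated base-case outcome (intercept):", beta[0])
    print("standardized coefficients (relative importance):", beta[1:])
    ```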

  2. SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements

    NASA Technical Reports Server (NTRS)

    Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.

    1977-01-01

    A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.

  3. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

    To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences between the results of these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and the data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. In total, 55 systematic reviews containing 58 DTA meta-analyses were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of using the Meta-disc software incorrectly. The differences in the medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-disc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance, the larger the deviation of simple pooling is likely to be. It is necessary to improve knowledge of statistical methods and software for meta-analyses of DTA data.
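
    To make the contrast concrete, simple pooling collapses the fourfold tables of all studies into a single table before computing sensitivity and specificity, which ignores the between-study heterogeneity that a bivariate random-effects model is designed to capture. A minimal sketch of the pooling step with invented counts (not data from the included reviews):

    ```python
    import numpy as np

    # Hypothetical 2x2 tables from three studies: (TP, FP, FN, TN).
    tables = np.array([
        [45,  5, 10,  90],
        [30,  8,  6,  70],
        [60, 12, 15, 110],
    ])

    tp, fp, fn, tn = tables.T

    # Simple (naive) pooling: collapse all studies into one big table.
    pooled_sens = tp.sum() / (tp.sum() + fn.sum())
    pooled_spec = tn.sum() / (tn.sum() + fp.sum())
    print(f"simply pooled sensitivity: {pooled_sens:.3f}")
    print(f"simply pooled specificity: {pooled_spec:.3f}")

    # Per-study estimates, whose spread (heterogeneity) simple pooling ignores.
    print("per-study sensitivities:", np.round(tp / (tp + fn), 3))
    print("per-study specificities:", np.round(tn / (tn + fp), 3))
    ```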

  4. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080

  5. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
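
    One of the global measures listed above, the partial rank correlation coefficient (PRCC), can be computed for any sampled model: rank-transform the parameter samples and the output, remove the linear effect of the remaining parameters from both the parameter of interest and the output, and correlate the residuals. The sketch below is a generic numpy/scipy illustration on a toy model, not code from SBML-SAT:

    ```python
    import numpy as np
    from scipy.stats import rankdata, pearsonr

    def prcc(samples, output):
        """Partial rank correlation of each column of `samples` with `output`."""
        r_x = np.column_stack([rankdata(c) for c in samples.T])
        r_y = rankdata(output)
        coeffs = []
        for j in range(r_x.shape[1]):
            others = np.column_stack([np.ones(len(r_y)), np.delete(r_x, j, axis=1)])
            # Residuals after removing the (linear) effect of the other parameters.
            res_xj = r_x[:, j] - others @ np.linalg.lstsq(others, r_x[:, j], rcond=None)[0]
            res_y = r_y - others @ np.linalg.lstsq(others, r_y, rcond=None)[0]
            coeffs.append(pearsonr(res_xj, res_y)[0])
        return np.array(coeffs)

    rng = np.random.default_rng(1)
    x = rng.uniform(size=(2000, 3))                                # sampled parameters
    y = 2 * x[:, 0] ** 2 - x[:, 1] + 0.1 * rng.normal(size=2000)   # toy model output
    print(np.round(prcc(x, y), 3))
    ```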

  6. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, as is the case for the rainfall-runoff fuzzy models analysed in this study. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
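
    Regional Sensitivity Analysis of the kind used here classifies Monte Carlo parameter sets as behavioural or non-behavioural according to a goodness-of-fit threshold and then compares, parameter by parameter, the two marginal distributions (for instance with a Kolmogorov-Smirnov statistic). A schematic illustration with an invented model and threshold, not the fuzzy rainfall-runoff models of the study:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)

    # Monte Carlo sampling of two hypothetical model parameters.
    theta = rng.uniform(0, 1, size=(5000, 2))

    # Stand-in for a model-performance score (e.g. Nash-Sutcliffe efficiency).
    score = 1.0 - (theta[:, 0] - 0.7) ** 2 - 0.05 * rng.normal(size=5000) ** 2

    behavioural = score > 0.9          # illustrative acceptance threshold
    for j in range(theta.shape[1]):
        stat, p = ks_2samp(theta[behavioural, j], theta[~behavioural, j])
        print(f"parameter {j}: KS statistic = {stat:.3f} (larger = more sensitive)")
    ```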

  7. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108

  8. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  10. Inhomogeneous Forcing and Transient Climate Sensitivity

    NASA Technical Reports Server (NTRS)

    Shindell, Drew T.

    2014-01-01

    Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range, taking into account the reduced warming rates during the past 10-15 years, during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes, where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (less than 1.3 °C) is very unlikely.

  11. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  12. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    EPA Science Inventory

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  13. Ground water flow modeling with sensitivity analyses to guide field data collection in a mountain watershed

    USGS Publications Warehouse

    Johnson, Raymond H.

    2007-01-01

    In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, which is applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities determine the most important input parameter(s) along with the location of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data in determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce the time and money spent by collecting the most relevant model calibration data.
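
    The composite scaled sensitivity (CSS) reported by MODFLOW-2000 aggregates, for each parameter, the dimensionless scaled sensitivities over all observations, css_j = sqrt((1/ND) * sum_i dss_ij^2). A generic sketch of that aggregation for an arbitrary sensitivity (Jacobian) matrix, with invented parameter values and weights:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical Jacobian: d(simulated head_i) / d(parameter_j) for
    # 50 head observations and 4 parameters (e.g. recharge zones, layer K values).
    jac = rng.normal(size=(50, 4))
    params = np.array([1e-4, 5.0, 2.0, 0.3])      # illustrative parameter values
    weights = np.full(50, 1.0)                    # observation weights (1/variance)

    # Dimensionless scaled sensitivities and composite scaled sensitivities.
    dss = jac * params * np.sqrt(weights)[:, None]
    css = np.sqrt((dss ** 2).mean(axis=0))
    print("composite scaled sensitivity per parameter:", np.round(css, 3))
    ```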

  14. Sediment delivery modeling in practice: Comparing the effects of watershed characteristics and data resolution across hydroclimatic regions.

    PubMed

    Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James

    2017-02-15

    Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6-13,500 km²), climate (tropical, subtropical, Mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge on the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    EPA Science Inventory

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  16. Digital data processing system dynamic loading analysis

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Tucker, A. E.

    1976-01-01

    Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.

  17. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  18. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  19. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems]

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions is presented. Discussions of previous work, and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.

  20. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
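
    The stratified sampling scheme offered by SaSAT, Latin Hypercube Sampling, can be reproduced in a few lines: divide each parameter's range into equal-probability strata, draw one value per stratum, and randomly pair strata across parameters. The following is a minimal numpy version for illustration, not SaSAT's own implementation:

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, seed=None):
        """Latin Hypercube sample: one value per equal-probability stratum per parameter."""
        rng = np.random.default_rng(seed)
        n_params = len(bounds)
        # Stratified uniform draws on (0, 1), one per stratum.
        u = (np.arange(n_samples)[:, None] + rng.uniform(size=(n_samples, n_params))) / n_samples
        # Shuffle the strata independently for each parameter.
        for j in range(n_params):
            rng.shuffle(u[:, j])
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + u * (hi - lo)

    # Example: 10 samples of two hypothetical parameters.
    print(latin_hypercube(10, bounds=[(0.1, 0.5), (10.0, 50.0)], seed=0))
    ```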

  1. DEVELOPMENT OF MESOSCALE AIR QUALITY SIMULATION MODELS. VOLUME 1. COMPARATIVE SENSITIVITY STUDIES OF PUFF, PLUME, AND GRID MODELS FOR LONG DISTANCE DISPERSION

    EPA Science Inventory

    This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...

  2. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.

  3. An approach to measure parameter sensitivity in watershed hydrological modelling

    EPA Science Inventory

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detail sensitivity analyses. To compare the...

  4. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  5. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  6. Sensitivity analysis of static resistance of slender beam under bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeš, Jan

    2016-06-08

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was computed by a geometrically nonlinear finite element method in the programme Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of the resistance.

  7. Simulating smoke transport from wildland fires with a regional-scale air quality model: sensitivity to spatiotemporal allocation of fire emissions.

    PubMed

    Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T

    2014-09-15

    Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands

    PubMed Central

    Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven

    2015-01-01

    Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive in response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
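
    The general shape of such an analysis (a cohort cycled monthly through a Markov model, with incremental costs divided by incremental QALYs to give an ICER) can be sketched generically. All transition probabilities, costs and utilities below are invented for illustration and are not the values used in this evaluation:

    ```python
    import numpy as np

    def markov_cea(p_response, monthly_cost, u_response=0.85, u_no_response=0.70,
                   cycles=12):
        """Tiny two-state monthly Markov cohort model returning (cost, QALYs)."""
        state = np.array([0.0, 1.0])                  # [responder, non-responder]
        transition = np.array([[1.0, 0.0],            # responders stay responders
                               [p_response, 1.0 - p_response]])
        cost = qaly = 0.0
        for _ in range(cycles):
            state = state @ transition
            cost += monthly_cost
            qaly += (state[0] * u_response + state[1] * u_no_response) / 12.0
        return cost, qaly

    # Hypothetical comparison of a new drug vs. continued standard treatment.
    cost_new, qaly_new = markov_cea(p_response=0.25, monthly_cost=80.0)
    cost_std, qaly_std = markov_cea(p_response=0.10, monthly_cost=20.0)
    icer = (cost_new - cost_std) / (qaly_new - qaly_std)
    print(f"ICER: {icer:.0f} (cost per QALY gained)")
    ```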

  9. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
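
    The proper orthogonal decomposition underlying such a ROM is usually obtained from a singular value decomposition of a snapshot matrix: the leading left singular vectors serve as the reduced basis onto which the governing equations are projected. A generic sketch on synthetic snapshots, not the shallow water solver of the article:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic snapshot matrix: 500 spatial degrees of freedom x 80 time snapshots.
    t = np.linspace(0, 1, 80)
    x = np.linspace(0, 1, 500)[:, None]
    snapshots = (np.sin(2 * np.pi * x) * np.cos(4 * np.pi * t)
                 + 0.3 * np.sin(6 * np.pi * x) * np.sin(2 * np.pi * t)
                 + 0.01 * rng.normal(size=(500, 80)))

    # POD modes = left singular vectors of the (mean-centred) snapshot matrix.
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.999)) + 1     # modes capturing 99.9% of the energy
    basis = U[:, :r]                                # reduced basis for Galerkin projection
    print(f"{r} POD modes retain {energy[r - 1]:.4%} of the snapshot energy")
    ```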

  10. Structure of Preschool Phonological Sensitivity: Overlapping Sensitivity to Rhyme, Words, Syllables, and Phonemes.

    ERIC Educational Resources Information Center

    Anthony, Jason L.; Lonigan, Christopher J.; Burgess, Stephen R.; Driscoll, Kimberly; Phillips, Beth M.; Cantor, Brenlee G.

    2002-01-01

    This study examined relations among sensitivity to words, syllables, rhymes, and phonemes in older and younger preschoolers. Confirmatory factor analyses found that a one-factor model best explained the data from both groups of children. Only variance common to all phonological sensitivity skills was related to print knowledge and rudimentary…

  11. Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.

    PubMed

    Ingalls, Brian; Mincheva, Maya; Roussel, Marc R

    2017-07-01

    A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.

  12. Report of the LSPI/NASA Workshop on Lunar Base Methodology Development

    NASA Technical Reports Server (NTRS)

    Nozette, Stewart; Roberts, Barney

    1985-01-01

    Groundwork was laid for computer models which will assist in the design of a manned lunar base. The models, herein described, will provide the following functions for the successful conclusion of that task: strategic planning; sensitivity analyses; impact analyses; and documentation. Topics addressed include: upper level model description; interrelationship matrix; user community; model features; model descriptions; system implementation; model management; and plans for future action.

  13. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  14. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.

  15. Subject-specific finite element modelling of the human foot complex during walking: sensitivity analysis of material properties, boundary and loading conditions.

    PubMed

    Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei

    2018-04-01

    The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.

  16. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.

  17. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
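
    The combination described here, Monte Carlo simulation in which some inputs are drawn by bootstrap resampling of observed data rather than from assumed theoretical distributions, can be illustrated generically. Every quantity below is a synthetic placeholder rather than data from the H. pylori model:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic per-patient observations standing in for trial data.
    observed_costs = rng.gamma(shape=2.0, scale=300.0, size=120)
    observed_cured = rng.binomial(1, 0.85, size=120)

    n_sim = 5000
    icers = np.empty(n_sim)
    for i in range(n_sim):
        # Bootstrap the observed data instead of assuming a parametric distribution.
        idx = rng.integers(0, 120, size=120)
        cost = observed_costs[idx].mean()
        p_cure = observed_cured[idx].mean()
        # Comparator parameters drawn from assumed distributions (illustrative).
        cost_cmp = rng.normal(450.0, 40.0)
        p_cure_cmp = rng.beta(60, 40)
        icers[i] = (cost - cost_cmp) / (p_cure - p_cure_cmp)

    print("median ICER:", np.round(np.median(icers), 1))
    print("2.5th-97.5th percentile:", np.round(np.percentile(icers, [2.5, 97.5]), 1))
    ```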

  18. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.

  19. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options. PMID:27547529
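
    To make the GRIP-style workflow concrete, here is a small Python sketch of a global sensitivity analysis for a coupled habitat-demography toy model: demographic rates and a habitat-amount input are sampled with a Latin hypercube, a stochastic population projection estimates quasi-extinction risk, and rank correlations summarize relative parameter influence. The population model, parameter ranges, and thresholds are invented for illustration and are not GRIP 2.0 or the whitebark pine analysis.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(1)

      def lhs(n, k):
          """Simple Latin hypercube sample on the unit hypercube."""
          return (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T + rng.random((n, k))) / n

      def extinction_risk(juv_surv, adult_surv, fecundity, habitat_amount, n_rep=60):
          """Toy two-stage stochastic projection; returns the fraction of replicates whose
          adult population falls below a quasi-extinction threshold after 40 years."""
          K = 500.0 * habitat_amount                   # carrying capacity scaled by habitat amount
          quasi_ext = 0
          for _ in range(n_rep):
              juv, adult = 50.0, 50.0
              for _ in range(40):
                  env = rng.normal(1.0, 0.15)          # environmental stochasticity
                  juv, adult = adult * fecundity, min(K, adult * adult_surv * env + juv * juv_surv * env)
              quasi_ext += adult < 20.0
          return quasi_ext / n_rep

      names = ["juv_surv", "adult_surv", "fecundity", "habitat_amount"]
      lo = np.array([0.1, 0.60, 0.5, 0.2])
      hi = np.array([0.5, 0.95, 2.0, 1.0])
      X = lo + lhs(300, 4) * (hi - lo)
      y = np.array([extinction_risk(*row) for row in X])

      for j, name in enumerate(names):
          rho, _ = spearmanr(X[:, j], y)
          print(f"{name:15s} rank correlation with extinction risk: {rho:+.2f}")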

  20. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 2 - evaluation of rehabilitation fixes (part 2).

    DOT National Transportation Integrated Search

    2013-08-01

    The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...

  1. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
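
    The core calibration step that iTOUGH2 automates (minimizing a weighted objective function of model-observation differences) can be sketched in a few lines of Python; the forward model here is a toy exponential pressure decline standing in for a TOUGH2 run, and all observations and parameter values are hypothetical.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical pressure observations at a well (illustrative only)
      t_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
      p_obs = np.array([9.1, 8.4, 7.2, 6.3, 5.1])
      sigma = 0.2 * np.ones_like(p_obs)              # observation standard deviations

      def forward(theta, t):
          """Toy forward model standing in for the reservoir simulator."""
          p0, k = theta
          return p0 * np.exp(-k * t)

      def weighted_residuals(theta):
          # Weighted differences between model output and observations
          return (forward(theta, t_obs) - p_obs) / sigma

      fit = least_squares(weighted_residuals, x0=[10.0, 0.05])
      print("estimated parameters:", fit.x)
      print("objective (sum of squared weighted residuals):", 2 * fit.cost)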

  2. A Methodological Review of US Budget-Impact Models for New Drugs.

    PubMed

    Mauskopf, Josephine; Earnshaw, Stephanie

    2016-11-01

    A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed, and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that several analyses for chronic conditions did not include growth in the treated population size and/or the changes in disease-related costs expected over the model time horizon with more effective treatments. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
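
    The arithmetic behind the recommended elements (a growing treated population, shifting market share, and disease-related cost offsets) is simple to lay out; the short Python sketch below uses entirely hypothetical figures to show how omitting population growth or cost offsets changes the estimated budget impact.

      # Illustrative 3-year budget-impact calculation; all figures are hypothetical.
      eligible = [10000, 10500, 11000]        # treated population grows over the horizon
      share_new = [0.10, 0.25, 0.40]          # uptake of the new drug by year
      cost_new, cost_old = 12000.0, 9000.0    # annual drug cost per patient
      offset_new = 1500.0                     # assumed disease-related cost offset with the new drug

      for yr, (n, s) in enumerate(zip(eligible, share_new), start=1):
          reference = n * cost_old
          new_mix = n * (s * (cost_new - offset_new) + (1 - s) * cost_old)
          print(f"year {yr}: budget impact = {new_mix - reference:,.0f}")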

  3. Space Transfer Concepts and Analyses for Exploration Missions. Technical Directive 12: Beamed Power Systems Study

    NASA Technical Reports Server (NTRS)

    Eder, D.

    1992-01-01

    Parametric models were constructed for Earth-based laser-powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine best parameter values for the systems. Orbit transfer flight simulations were conducted to explore effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle and conducted sensitivity analyses for numbers of ground stations, Isp including dual-Isp transfers, and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.

  4. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    NASA Astrophysics Data System (ADS)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to better understanding of how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
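
    A common way to carry out the kind of R0 uncertainty analysis mentioned above is to sample the disease-related parameters and rank-correlate them with the resulting reproductive number; the sketch below does this in Python with an illustrative R0 expression and parameter ranges that are not taken from the paper.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)
      n = 5000

      # Hypothetical parameter ranges (illustrative only)
      beta  = rng.uniform(0.10, 0.50, n)     # transmission rate
      gamma = rng.uniform(0.05, 0.20, n)     # recovery rate
      mu    = rng.uniform(0.05, 0.15, n)     # disease-induced death rate

      R0 = beta / (gamma + mu)               # illustrative reproductive-number expression

      for name, x in [("beta", beta), ("gamma", gamma), ("mu", mu)]:
          rho, _ = spearmanr(x, R0)
          print(f"sensitivity of R0 to {name}: rank correlation {rho:+.2f}")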

  5. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    NASA Astrophysics Data System (ADS)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the more delicate nature of biological micro/nanoparticles, it is necessary to compute the critical force of manipulation. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require an exact modeling of cantilever stiffness, especially the stiffness of dagger cantilevers, because the previous model is not useful for this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods. One of them is the PBA method. In another approach, the cantilever is divided into two sections: a triangular head section and two slanted rectangular beams. Then, deformations along different directions are computed and used to obtain the stiffness values in different directions. The stiffness formulations of the dagger cantilever are needed for these sensitivity analyses, so the formulations have been derived first and the sensitivity analyses then carried out. In examining the stiffness of the dagger-shaped cantilever, the micro-beam has been divided into triangular and rectangular sections, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever have been obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these types of cantilevers have been carried out. Also, the effects of different cantilevers on the dynamic behavior of nanoparticles have been studied and the dagger-shaped cantilever has been deemed more suitable for the manipulation of biological particles.
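
    For readers unfamiliar with variance-based indices, the sketch below estimates first-order Sobol indices for a plain rectangular cantilever whose bending stiffness is k = E*w*t^3/(4*L^3); the estimator is the standard pick-and-freeze form, and the material and geometric ranges are assumed AFM-like values rather than the paper's dagger-cantilever formulation.

      import numpy as np

      rng = np.random.default_rng(3)

      def stiffness(E, w, t, L):
          """Bending stiffness of a rectangular cantilever: k = E*w*t^3 / (4*L^3)."""
          return E * w * t**3 / (4.0 * L**3)

      # Hypothetical material/geometry ranges: E [Pa], w [m], t [m], L [m]
      lo = np.array([120e9, 20e-6, 0.5e-6, 150e-6])
      hi = np.array([180e9, 50e-6, 1.5e-6, 250e-6])

      N = 20000
      A = lo + rng.random((N, 4)) * (hi - lo)
      B = lo + rng.random((N, 4)) * (hi - lo)
      fA, fB = stiffness(*A.T), stiffness(*B.T)
      var = np.var(np.concatenate([fA, fB]))

      for i, name in enumerate(["E", "w", "t", "L"]):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                 # 'freeze' all inputs except the i-th
          Si = np.mean(fB * (stiffness(*ABi.T) - fA)) / var
          print(f"first-order Sobol index for {name}: {Si:.2f}")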

  6. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  7. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Reduced size first-order subsonic and supersonic aeroelastic modeling

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.

  9. Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Masoudi, A.; Newson, T. P.

    2017-04-01

    A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations using a signal processing procedure similar to that used to extract the phase information from the sensing setup.

  10. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification systems (K-Class) are discussed. Within each S-class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
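
    The classification step described above (reduce each basin's parameter-sensitivity pattern with PCA, then cluster with an expectation-maximization algorithm) can be reproduced schematically with standard scikit-learn components; the sensitivity matrix below is random placeholder data, not the MOPEX results.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)

      # Hypothetical sensitivity matrix: rows = basins, columns = parameter sensitivity scores
      n_basins, n_params = 431, 10
      S = rng.random((n_basins, n_params))

      # Reduce the sensitivity patterns to a few principal components, then cluster
      # basins with a Gaussian mixture fitted by expectation-maximization.
      scores = PCA(n_components=3).fit_transform(S)
      labels = GaussianMixture(n_components=4, random_state=0).fit_predict(scores)

      for c in range(4):
          print(f"S-class {c}: {np.sum(labels == c)} basins")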

  11. Vibration isolation technology: Sensitivity of selected classes of space experiments to residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Zhang, Y. Q.; Adebiyi, Adebimpe

    1989-01-01

    Progress performed on each task is described. Order of magnitude analyses related to liquid zone sensitivity and thermo-capillary flow sensitivity are covered. Progress with numerical models of the sensitivity of isothermal liquid zones is described. Progress towards a numerical model of coupled buoyancy-driven and thermo-capillary convection experiments is also described. Interaction with NASA personnel is covered. Results to date are summarized and they are discussed in terms of the predicted space station acceleration environment. Work planned for the second year is also discussed.

  12. Exploring sensitivity of a multistate occupancy model to inform management decisions

    USGS Publications Warehouse

    Green, A.W.; Bailey, L.L.; Nichols, J.D.

    2011-01-01

    Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model involving occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). In many ways, multistate sensitivity metrics behave in similar ways as standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate framework. Assuming the system is near equilibrium, our sensitivity analyses illustrate how to investigate the sensitivity of the system-specific equilibrium state(s) to changes in transition rates. Because management will typically act on these transition rates, sensitivity analyses can provide valuable information about the potential influence of different actions and when it may be prudent to shift the focus of management among the various transition rates. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.
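
    The equilibrium-state sensitivities discussed above can be computed numerically from any multistate transition matrix: find the stationary state distribution, perturb one transition rate (compensating so rows still sum to one), and difference the results. The Python sketch below uses an invented three-state matrix (unoccupied, occupied without reproduction, occupied with reproduction), not the owl or wood frog estimates.

      import numpy as np

      def stationary(P):
          """Stationary (equilibrium) state distribution of a row-stochastic matrix."""
          vals, vecs = np.linalg.eig(P.T)
          v = np.real(vecs[:, np.argmax(np.real(vals))])
          return v / v.sum()

      # Hypothetical transition matrix; rows/cols = (unoccupied, occupied without
      # reproduction, occupied with reproduction)
      P = np.array([[0.70, 0.20, 0.10],
                    [0.25, 0.45, 0.30],
                    [0.10, 0.30, 0.60]])

      base = stationary(P)
      print("equilibrium state distribution:", np.round(base, 3))

      # Numerical sensitivity of the equilibrium to one transition rate, here the
      # probability of moving from 'occupied without' to 'occupied with' reproduction
      eps = 1e-4
      Pp = P.copy()
      Pp[1, 2] += eps
      Pp[1, 1] -= eps                         # keep the row summing to one
      print("sensitivity to P[1,2]:", np.round((stationary(Pp) - base) / eps, 3))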

  13. Balancing data sharing requirements for analyses with data sensitivity

    USGS Publications Warehouse

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

    Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' in mapping applications and downloaded files to quarter-quadrangle grid cells, but the actual locations are available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
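
    The 'fuzzing' step itself amounts to snapping precise coordinates to the corner of a coarse grid cell while retaining the exact values internally; a minimal Python sketch, with the cell size assumed to approximate a quarter-quadrangle, is shown below.

      def fuzz(lat, lon, cell=0.0625):
          """Snap a precise location to the lower-left corner of its grid cell;
          the cell size is an assumed stand-in for a quarter-quadrangle."""
          return (lat // cell) * cell, (lon // cell) * cell

      precise = (40.58231, -105.08442)        # hypothetical sensitive record
      print("public (fuzzed):", fuzz(*precise))
      print("retained for analyses:", precise)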

  14. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  15. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  16. Spatiotemporal sensitivity analysis of vertical transport of pesticides in soil

    EPA Science Inventory

    Environmental fate and transport processes are influenced by many factors. Simulation models that mimic these processes often have complex implementations, which can lead to over-parameterization. Sensitivity analyses are subsequently used to identify critical parameters whose un...

  17. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.

  18. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed us to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
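
    The scoring idea (flag a parameter as important when it ranks highly on at least two of the three analyses) is easy to illustrate on a toy model; the Python sketch below computes a normalized local derivative, a one-at-a-time range swing, and a Monte Carlo rank correlation for three invented parameters that merely stand in for the PCB model inputs.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(5)

      def model(x):
          """Toy stand-in for the fate model: concentration as a function of three parameters."""
          settling, partition, degradation = x
          return 100.0 * partition / (1.0 + settling) * np.exp(-degradation)

      names = ["settling", "partition", "degradation"]
      nominal = np.array([0.5, 0.3, 0.1])
      lo, hi = nominal * 0.5, nominal * 1.5

      # (1) Local sensitivity: normalized derivatives at the nominal point
      local = np.array([(model(nominal + np.eye(3)[i] * 1e-4) - model(nominal)) / 1e-4
                        * nominal[i] / model(nominal) for i in range(3)])

      # (2) Range sensitivity: one-at-a-time swing across each parameter's range
      swing = np.array([abs(model(np.where(np.arange(3) == i, hi, nominal))
                            - model(np.where(np.arange(3) == i, lo, nominal)))
                        for i in range(3)])

      # (3) Rank correlation from a Monte Carlo sample
      X = lo + rng.random((2000, 3)) * (hi - lo)
      y = np.apply_along_axis(model, 1, X)
      rank = np.array([abs(spearmanr(X[:, i], y)[0]) for i in range(3)])

      # Score: count how many of the three metrics place the parameter at or above the median
      for i, name in enumerate(names):
          score = sum(m[i] >= np.median(m) for m in (np.abs(local), swing, rank))
          print(f"{name:12s} scores high on {score} of 3 analyses")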

  19. Constitutive Modeling of the Dynamic-Tensile-Extrusion Test of PTFE

    NASA Astrophysics Data System (ADS)

    Resnyansky, Anatoly; Brown, Eric; Trujillo, Carl; Gray, George

    2015-06-01

    Use of polymers in defence, aerospace and industrial applications at extreme conditions makes prediction of the behaviour of these materials very important. Crucial to this is knowledge of the physical damage response in association with the phase transformations during the loading and the ability to predict this via multi-phase simulation taking the thermodynamical non-equilibrium and strain rate sensitivity into account. The current work analyses Dynamic-Tensile-Extrusion (DTE) experiments on polytetrafluoroethylene (PTFE). In particular, the phase transition during loading with subsequent tension is analysed using a two-phase rate sensitive material model implemented in the CTH hydrocode, and the calculations are compared with experimental high-speed photography. The damage patterns and their link with the change of loading modes are analysed numerically and are correlated to the test observations.

  20. Should cell-free DNA testing be used to target antenatal rhesus immune globulin administration?

    PubMed

    Ma, Kimberly K; Rodriguez, Maria I; Cheng, Yvonne W; Norton, Mary E; Caughey, Aaron B

    2016-01-01

    To compare the rates of alloimmunization with the use of cell-free DNA (cfDNA) screening to target antenatal rhesus immune globulin (RhIG) prenatally, versus routine administration of RhIG in rhesus D (RhD)-negative pregnant women in a theoretic cohort using a decision-analytic model. A decision-analytic model compared cfDNA testing to routine antenatal RhIG administration. The primary outcome was maternal sensitization to RhD antigen. Sensitivity and specificity of cfDNA testing were assumed to be 99.8% and 95.3%, respectively. Univariate and bivariate sensitivity analyses, Monte Carlo simulation, and threshold analyses were performed. In a cohort of 10,000 RhD-negative women, 22.6 sensitizations would occur with utilization of cfDNA, while 20 sensitizations would occur with routine RhIG. Only when the sensitivity of the cfDNA test reached 100% was the rate of sensitization equal for both cfDNA and RhIG. Otherwise, routine RhIG minimized the rate of sensitization, especially given RhIG is readily available in the United States. Adoption of cfDNA testing would result in a 13.0% increase in sensitization among RhD-negative women in a theoretical cohort taking into account the ethnic diversity of the United States' population.

  1. Taxometric and Factor Analytic Models of Anxiety Sensitivity among Youth: Exploring the Latent Structure of Anxiety Psychopathology Vulnerability

    ERIC Educational Resources Information Center

    Bernstein, Amit; Zvolensky, Michael J.; Stewart, Sherry; Comeau, Nancy

    2007-01-01

    This study represents an effort to better understand the latent structure of anxiety sensitivity (AS), a well-established affect-sensitivity individual difference factor, among youth by employing taxometric and factor analytic approaches in an integrative manner. Taxometric analyses indicated that AS, as indexed by the Child Anxiety Sensitivity…

  2. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  3. An approach to measure parameter sensitivity in watershed ...

    EPA Pesticide Factsheets

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with the flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near surface and impervious parameters. The high and medium flows are more impacted by most of the parameters. The low flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land use changes on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
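
    The regime-specific NRMSE idea can be sketched directly: compute the flow-duration-curve percentiles of a baseline run, split the days into high, medium, and low flow regimes, and report the normalized error of a perturbed run within each regime. The flows below are synthetic placeholders, not LMR or LVW output.

      import numpy as np

      rng = np.random.default_rng(6)

      def nrmse(base, sim):
          """Root-mean-square error normalized by the range of the baseline series."""
          return np.sqrt(np.mean((sim - base) ** 2)) / (base.max() - base.min())

      # Hypothetical daily flows: a baseline run and a run with one parameter perturbed
      base = rng.lognormal(mean=2.0, sigma=0.8, size=3650)
      perturbed = base * rng.normal(1.05, 0.05, size=base.size)

      # Split days into flow regimes using flow-duration-curve percentiles of the baseline
      high = base >= np.percentile(base, 80)
      low = base <= np.percentile(base, 20)
      medium = ~high & ~low

      for name, mask in [("high", high), ("medium", medium), ("low", low)]:
          print(f"{name:6s} flow regime NRMSE: {nrmse(base[mask], perturbed[mask]):.3f}")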

  4. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.

  5. Time to angiographic reperfusion in acute ischemic stroke: decision analysis.

    PubMed

    Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H

    2014-12-01

    Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.

  6. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where full-likelihood methods cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analysis. We propose a novel approach in this paper for investigating the plausibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
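
    The core comparison (simulate the observed portion of the data under candidate missing-not-at-random mechanisms and measure how far each simulated dataset sits from the actual observed data using nearest-neighbour distances) can be sketched as follows; the data-generating process, the MNAR mechanism, and the candidate parameter values are all invented for illustration.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(7)

      # 'True' process (unknown in practice): larger outcomes are more likely to be missing
      full = rng.normal(0.0, 1.0, size=1000)
      observed = full[full + rng.normal(0, 0.5, full.size) < 0.5].reshape(-1, 1)

      def simulate_observed(delta, n=1000):
          """Observed part of a dataset simulated under an MNAR mechanism indexed by a
          sensitivity parameter delta (larger delta = stronger dependence on the outcome)."""
          y = rng.normal(0.0, 1.0, size=n)
          keep = y * delta + rng.normal(0, 0.5, n) < 0.5
          return y[keep].reshape(-1, 1)

      def knn_discrepancy(a, b):
          """Symmetrized mean nearest-neighbour distance between two samples."""
          d_ab, _ = NearestNeighbors(n_neighbors=1).fit(b).kneighbors(a)
          d_ba, _ = NearestNeighbors(n_neighbors=1).fit(a).kneighbors(b)
          return 0.5 * (d_ab.mean() + d_ba.mean())

      # Screen candidate sensitivity-parameter values by how closely the simulated
      # observed data match the actual observed data (smaller = more plausible)
      for delta in [0.0, 0.5, 1.0, 2.0]:
          print(f"delta = {delta:3.1f}: discrepancy {knn_discrepancy(simulate_observed(delta), observed):.4f}")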

  7. Taxometric and Factor Analytic Models of Anxiety Sensitivity: Integrating Approaches to Latent Structural Research

    ERIC Educational Resources Information Center

    Bernstein, Amit; Zvolensky, Michael J.; Norton, Peter J.; Schmidt, Norman B.; Taylor, Steven; Forsyth, John P.; Lewis, Sarah F.; Feldner, Matthew T.; Leen-Feldner, Ellen W.; Stewart, Sherry H.; Cox, Brian

    2007-01-01

    This study represents an effort to better understand the latent structure of anxiety sensitivity (AS), as indexed by the 16-item Anxiety Sensitivity Index (ASI; S. Reiss, R. A. Peterson, M. Gursky, & R. J. McNally, 1986), by using taxometric and factor-analytic approaches in an integrative manner. Taxometric analyses indicated that AS has a…

  8. Evaluating trade-offs in bull trout reintroduction strategies using structured decision making

    USGS Publications Warehouse

    Brignon, William R.; Peterson, James T.; Dunham, Jason B.; Schaller, Howard A.; Schreck, Carl B.

    2018-01-01

    Structured decision making allows reintroduction decisions to be made despite uncertainty by linking reintroduction goals with alternative management actions through predictive models of ecological processes. We developed a decision model to evaluate the trade-offs between six bull trout (Salvelinus confluentus) reintroduction decisions with the goal of maximizing the number of adults in the recipient population without reducing the donor population to an unacceptable level. Sensitivity analyses suggested that the decision identity and outcome were most influenced by survival parameters that result in increased adult abundance in the recipient population, increased juvenile survival in the donor and recipient populations, adult fecundity rates, and sex ratio. The decision was least sensitive to survival parameters associated with the captive-reared population, the effect of naivety on released individuals, and juvenile carrying capacity of the reintroduced population. The model and sensitivity analyses can serve as the foundation for formal adaptive management and improved effectiveness, efficiency, and transparency of bull trout reintroduction decisions.

  9. Dandelions, tulips and orchids: evidence for the existence of low-sensitive, medium-sensitive and high-sensitive individuals.

    PubMed

    Lionetti, Francesca; Aron, Arthur; Aron, Elaine N; Burns, G Leonard; Jagiellowicz, Jadzia; Pluess, Michael

    2018-01-22

    According to empirical studies and recent theories, people differ substantially in their reactivity or sensitivity to environmental influences with some being generally more affected than others. More sensitive individuals have been described as orchids and less-sensitive ones as dandelions. Applying a data-driven approach, we explored the existence of sensitivity groups in a sample of 906 adults who completed the highly sensitive person (HSP) scale. According to factor analyses, the HSP scale reflects a bifactor model with a general sensitivity factor. In contrast to prevailing theories, latent class analyses consistently suggested the existence of three rather than two groups. While we were able to identify a highly sensitive (orchids, 31%) and a low-sensitive group (dandelions, 29%), we also detected a third group (40%) characterised by medium sensitivity, which we refer to as tulips in keeping with the flower metaphor. Preliminary cut-off scores for all three groups are provided. In order to characterise the different sensitivity groups, we investigated group differences regarding the Big Five personality traits, as well as experimentally assessed emotional reactivity in an additional independent sample. According to these follow-up analyses, the three groups differed in neuroticism, extraversion and emotional reactivity to positive mood induction with orchids scoring significantly higher in neuroticism and emotional reactivity and lower in extraversion than the other two groups (dandelions also differed significantly from tulips). Findings suggest that environmental sensitivity is a continuous and normally distributed trait but that people fall into three distinct sensitive groups along a sensitivity continuum.

  10. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    USDA-ARS?s Scientific Manuscript database

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...

  11. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    PubMed

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, skewed sex-ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits on sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
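
    A toy version of the perturbation approach (apply the sensitivity calculation to the stable stage distribution rather than to lambda) is sketched below: a four-stage two-sex projection matrix is built from a handful of rates, the adult sex ratio is read off the dominant right eigenvector, and each rate is perturbed numerically. The matrix structure and rates are illustrative and are not the parrotlet estimates.

      import numpy as np

      def stable_stage(A):
          """Stable stage distribution: dominant right eigenvector, normalized to sum to one."""
          vals, vecs = np.linalg.eig(A)
          w = np.real(vecs[:, np.argmax(np.real(vals))])
          return w / w.sum()

      def build_matrix(fec, juv_f, juv_m, ad_f, ad_m, primary_sr=0.5):
          """Two-sex matrix with stages (juv F, ad F, juv M, ad M); female-dominant reproduction."""
          return np.array([
              [0.0,   fec * primary_sr,       0.0,   0.0],
              [juv_f, ad_f,                   0.0,   0.0],
              [0.0,   fec * (1 - primary_sr), 0.0,   0.0],
              [0.0,   0.0,                    juv_m, ad_m],
          ])

      def adult_sex_ratio(params):
          w = stable_stage(build_matrix(*params))
          return w[3] / (w[1] + w[3])          # proportion male among adults

      names = ["fecundity", "juv_f_surv", "juv_m_surv", "ad_f_surv", "ad_m_surv"]
      base = np.array([2.0, 0.30, 0.35, 0.70, 0.70])   # illustrative rates
      asr0 = adult_sex_ratio(base)
      print("baseline adult sex ratio (proportion male):", round(asr0, 3))

      # Numerical sensitivity of the ASR to each demographic rate
      for i, name in enumerate(names):
          p = base.copy()
          p[i] += 1e-5
          print(f"d(ASR)/d({name}) = {(adult_sex_ratio(p) - asr0) / 1e-5:+.3f}")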

  12. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    EPA Science Inventory

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  13. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  14. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  15. IMPROVING PARTICULATE MATTER SOURCE APPORTIONMENT FOR HEALTH STUDIES: A TRAINED RECEPTOR MODELING APPROACH WITH SENSITIVITY, UNCERTAINTY AND SPATIAL ANALYSES

    EPA Science Inventory

    An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...

  16. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    PubMed

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  17. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
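
    The surrogate-based route described above can be miniaturized as follows: fit linear (LIN) and quadratic (QUAD) response surfaces to a modest number of runs of a noisy toy response, then propagate a large Latin hypercube sample through the cheap surrogates and compare the resulting mean and standard deviation with direct sampling of the toy model. The response function and sample sizes are placeholders, not the foam decomposition model.

      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(8)

      def front_velocity(x):
          """Toy stand-in for the decomposition-front velocity, with a small noise term
          mimicking numerical noise in the finite element response."""
          a, b, c = x
          return 2.0 * a + 0.5 * b**2 - a * c + 0.02 * rng.normal()

      def lhs(n, k=3):
          return (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T + rng.random((n, k))) / n

      def design(X, quadratic):
          cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
          if quadratic:
              cols += [X[:, i] * X[:, j]
                       for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
          return np.column_stack(cols)

      # Fit the LIN and QUAD surrogates from a modest number of 'model runs'
      X_train = lhs(60)
      y_train = np.array([front_velocity(x) for x in X_train])
      beta_lin = np.linalg.lstsq(design(X_train, False), y_train, rcond=None)[0]
      beta_quad = np.linalg.lstsq(design(X_train, True), y_train, rcond=None)[0]

      # Propagate uncertainty through the cheap surrogates, then check against direct sampling
      X_big = lhs(20000)
      for name, beta, quad in [("LIN", beta_lin, False), ("QUAD", beta_quad, True)]:
          y = design(X_big, quad) @ beta
          print(f"{name:4s}: mean {y.mean():.3f}, std {y.std():.3f}")
      y_direct = np.array([front_velocity(x) for x in lhs(2000)])
      print(f"DLHS: mean {y_direct.mean():.3f}, std {y_direct.std():.3f}")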

  18. The sensitivity of the ESA DELTA model

    NASA Astrophysics Data System (ADS)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  19. UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS

    EPA Science Inventory

    While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...

  20. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
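
    The time-trend calculation itself is compact: pool the first j studies for each publication rank j with a random-effects model, then regress the cumulative summary estimate on rank. The Python sketch below does this for summary sensitivity on the logit scale using DerSimonian-Laird pooling and an inverse-standard-error weighted regression; the study data are simulated placeholders, and the weighting choice is an assumption rather than the authors' exact specification.

      import numpy as np

      rng = np.random.default_rng(9)

      # Hypothetical per-study sensitivities and diseased sample sizes, in publication order
      k = 15
      sens = np.clip(rng.normal(0.85, 0.06, k), 0.55, 0.99)
      n_pos = rng.integers(30, 200, k)

      y = np.log(sens / (1 - sens))                 # logit sensitivities
      v = 1.0 / (n_pos * sens * (1 - sens))         # approximate variances

      def dersimonian_laird(y, v):
          """Random-effects pooled estimate and standard error (DerSimonian-Laird)."""
          w = 1.0 / v
          yfe = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - yfe) ** 2)
          tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
          wr = 1.0 / (v + tau2)
          return np.sum(wr * y) / np.sum(wr), np.sqrt(1.0 / np.sum(wr))

      # Cumulative meta-analysis, then a weighted linear regression of summary sensitivity on rank
      ranks = np.arange(2, k + 1)
      summary, se = zip(*(dersimonian_laird(y[:j], v[:j]) for j in ranks))
      summary = 1.0 / (1.0 + np.exp(-np.array(summary)))   # back to the probability scale
      slope = np.polyfit(ranks, summary, 1, w=1.0 / np.array(se))[0]
      print(f"time trend in summary sensitivity: {slope:+.4f} per additional study")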

  1. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.

  2. Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions

    NASA Astrophysics Data System (ADS)

    Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.

    2017-12-01

    Ice sheet and glacier model outputs require inputs from imperfectly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's total and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as those described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.

  3. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, the parameter range, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial condition ranges, the greater the consistency and coherence between the results of the two sensitivity analysis methods.
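    The sketch below runs both Sobol' (Saltelli sampling) and FAST on a toy runoff-like function, assuming the SALib Python package (SALib.sample.saltelli / fast_sampler and SALib.analyze.sobol / fast). The toy function, its parameter names, and the bounds are invented stand-ins, not SAC-SMA.

```python
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

# toy stand-in for a rainfall-runoff response (not SAC-SMA itself)
def toy_runoff(x):
    capacity, k_recession, frac_imperv = x
    return frac_imperv * 10.0 + (1.0 - frac_imperv) * k_recession * np.sqrt(capacity)

problem = {
    "num_vars": 3,
    "names": ["capacity", "k_recession", "frac_imperv"],
    "bounds": [[10.0, 300.0], [0.01, 0.5], [0.0, 0.3]],  # assumed ranges
}

# Sobol' indices from a Saltelli sample of the toy model
X_sob = saltelli.sample(problem, 1024)
S_sob = sobol.analyze(problem, np.array([toy_runoff(x) for x in X_sob]))

# FAST indices for the same toy model
X_fast = fast_sampler.sample(problem, 1024)
S_fast = fast.analyze(problem, np.array([toy_runoff(x) for x in X_fast]))

# compare first-order indices from the two methods
for name, s1_sob, s1_fast in zip(problem["names"], S_sob["S1"], S_fast["S1"]):
    print(f"{name:12s}  Sobol S1={s1_sob:5.2f}   FAST S1={s1_fast:5.2f}")
```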

  4. Revenue Potential for Inpatient IR Consultation Services: A Financial Model.

    PubMed

    Misono, Alexander S; Mueller, Peter R; Hirsch, Joshua A; Sheridan, Robert M; Siddiqi, Assad U; Liu, Raymond W

    2016-05-01

    Interventional radiology (IR) has historically failed to fully capture the value of evaluation and management services in the inpatient setting. Understanding financial benefits of a formally incorporated billing discipline may yield meaningful insights for interventional practices. A revenue modeling tool was created deploying standard financial modeling techniques, including sensitivity and scenario analyses. Sensitivity analysis calculates revenue fluctuation related to dynamic adjustment of discrete variables. In scenario analysis, possible future scenarios as well as revenue potential of different-size clinical practices are modeled. Assuming a hypothetical inpatient IR consultation service with a daily patient census of 35 patients and two new consults per day, the model estimates annual charges of $2.3 million and collected revenue of $390,000. Revenues are most sensitive to provider billing documentation rates and patient volume. A range of realistic scenarios-from cautious to optimistic-results in a range of annual charges of $1.8 million to $2.7 million and a collected revenue range of $241,000 to $601,000. Even a small practice with a daily patient census of 5 and 0.20 new consults per day may expect annual charges of $320,000 and collected revenue of $55,000. A financial revenue modeling tool is a powerful adjunct in understanding economics of an inpatient IR consultation service. Sensitivity and scenario analyses demonstrate a wide range of revenue potential and uncover levers for financial optimization. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
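    A toy version of such a revenue model is sketched below: a deterministic base case, a one-way sensitivity sweep over the main drivers, and two named scenarios. All charge amounts, rates, and scenario values are invented for illustration and do not reproduce the paper's fee schedule or results.

```python
def annual_revenue(census=35, new_consults=2.0, doc_rate=0.7,
                   charge_followup=100.0, charge_consult=200.0,
                   collection_rate=0.17, days=365):
    """Toy inpatient consult revenue model (illustrative values only)."""
    charges = days * doc_rate * (census * charge_followup + new_consults * charge_consult)
    return charges, charges * collection_rate

base_charges, base_collected = annual_revenue()

# one-way sensitivity: perturb each driver +/-20% around the base case
drivers = {"census": 35, "new_consults": 2.0, "doc_rate": 0.7, "collection_rate": 0.17}
for name, base in drivers.items():
    lo = annual_revenue(**{name: 0.8 * base})[1]
    hi = annual_revenue(**{name: 1.2 * base})[1]
    print(f"{name:16s} collected revenue range: ${lo:,.0f} - ${hi:,.0f}")

# scenario analysis: cautious vs optimistic combinations of drivers
scenarios = {"cautious":   dict(census=28, new_consults=1.5, doc_rate=0.6),
             "optimistic": dict(census=42, new_consults=2.5, doc_rate=0.85)}
for label, kw in scenarios.items():
    charges, collected = annual_revenue(**kw)
    print(f"{label:10s} charges ${charges/1e6:.1f}M, collected ${collected/1e3:.0f}K")
```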

  5. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine -129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two order of magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
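    The sketch below illustrates the two reporting devices mentioned in this record, a complementary cumulative distribution function of dose and regression-based sensitivity measures, on a toy Monte Carlo dose model. The input distributions are invented, and standardized regression coefficients are used here as a stand-in for the study's sensitivity and partial correlation coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# hypothetical inputs for a toy occupational-dose model
handling_time = rng.lognormal(mean=1.0, sigma=0.3, size=n)   # hours per cask
dose_rate = rng.lognormal(mean=-2.0, sigma=0.5, size=n)      # mSv per hour
inspections = rng.poisson(3.0, size=n)                       # inspections per cask

dose = handling_time * dose_rate + 0.01 * inspections        # toy response (mSv)

# complementary cumulative distribution function of dose
x = np.sort(dose)
ccdf = 1.0 - np.arange(1, n + 1) / n

# standardized regression coefficients as simple sensitivity measures
X = np.column_stack([handling_time, dose_rate, inspections])
Xs = (X - X.mean(0)) / X.std(0)
ys = (dose - dose.mean()) / dose.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(["handling_time", "dose_rate", "inspections"], src):
    print(f"{name:14s} standardized regression coefficient = {c:+.2f}")
print(f"P(dose > 0.5 mSv) is roughly {np.interp(0.5, x, ccdf):.3f}")
```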

  6. Sensitivity of atmospheric aerosol scavenging to precipitation intensity and frequency in the context of global climate change

    NASA Astrophysics Data System (ADS)

    Hou, Pei; Wu, Shiliang; McCarty, Jessica L.; Gao, Yang

    2018-06-01

    Wet deposition driven by precipitation is an important sink for atmospheric aerosols and soluble gases. We investigate the sensitivity of atmospheric aerosol lifetimes to precipitation intensity and frequency in the context of global climate change. Our sensitivity model simulations, through some simplified perturbations to precipitation in the GEOS-Chem model, show that the removal efficiency and hence the atmospheric lifetime of aerosols have significantly higher sensitivities to precipitation frequencies than to precipitation intensities, indicating that the same amount of precipitation may lead to different removal efficiencies of atmospheric aerosols. Combining the long-term trends of precipitation patterns for various regions with the sensitivities of atmospheric aerosol lifetimes to various precipitation characteristics allows us to examine the potential impacts of precipitation changes on atmospheric aerosols. Analyses based on an observational dataset show that precipitation frequencies in some regions have decreased in the past 14 years, which might increase the atmospheric aerosol lifetimes in those regions. Similar analyses based on multiple reanalysis meteorological datasets indicate that the changes of precipitation intensity and frequency over the past 30 years can lead to perturbations in the atmospheric aerosol lifetimes by 10 % or higher at the regional scale.

  7. AN EXAMPLE OF MODEL STRUCTURE DIFFERENCES USING SENSITIVITY ANALYSES IN PHYSIOLOGICALLY BASED PHARMACOKINETIC MODELS OF TRICHLOROETHYLENE IN HUMANS

    EPA Science Inventory

    Abstract Trichloroethylene (TCE) is an industrial chemical and an environmental contaminant. TCE and its metabolites may be carcinogenic and affect human health. Physiologically based pharmacokinetic (PBPK) models that differ in compartmentalization are developed for TCE metabo...

  8. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, especially for models with many parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore of model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all of the three model outputs to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables, whereas plant parameters are not only informative for latent heat but also for soil drainage because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.

  9. SIMULATING RADIONUCLIDE FATE AND TRANSPORT IN THE UNSATURATED ZONE: EVALUATION AND SENSITIVITY ANALYSES OF SELECT COMPUTER MODELS

    EPA Science Inventory

    Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...

  10. The Impact of Spring Subsurface Soil Temperature Anomaly in the Western U.S. on North American Summer Precipitation: A Case Study Using Regional Climate Model Downscaling

    DTIC Science & Technology

    2012-06-02

    regional climate model downscaling, J. Geophys. Res., 117, D11103, doi:10.1029/2012JD017692. 1. Introduction [2] Modeling studies and data analyses... based on ground and satellite data have demonstrated that the land surface state variables, such as soil moisture, snow, vegetation, and soil temperature... downscaling rather than simply applying reanalysis data as LBC for both Eta control and sensitivity experiments as done in many RCM sensitivity studies

  11. Sensitivity of water resources in the Delaware River basin to climate variability and change

    USGS Publications Warehouse

    Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.

    1994-01-01

    Because of the greenhouse effect, projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climate change; and presents the results of sensitivity analyses of how climate change might affect water resources in the Delaware River basin. Sensitivity analyses suggest that potentially serious shortfalls of certain water resources in the basin could result if some scenarios for climate change come true. The results of model simulations of the basin streamflow demonstrate the difficulty in distinguishing the effects that climate change versus natural climate variability have on streamflow and water supply. The future direction of basin changes in most water resources, furthermore, cannot be precisely determined because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans for evaluating and responding to changes, should they occur.

  12. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
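    A minimal Morris-style elementary-effects screening, in the spirit of the sequential screening described above (though much simplified, with a fixed number of trajectories rather than a sequential stopping rule), might look like the following; the toy model, step size, and the mu* threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(x):
    """Toy stand-in for an expensive environmental model (not mHM)."""
    return 3.0 * x[0] + x[1] ** 2 + 0.1 * np.sin(x[2]) + 0.0 * x[3]

n_params, n_trajectories, delta = 4, 20, 0.1
ee = [[] for _ in range(n_params)]

# one-at-a-time elementary effects along random trajectories in [0, 1]^d
for _ in range(n_trajectories):
    x = rng.random(n_params)
    y0 = toy_model(x)
    for i in rng.permutation(n_params):
        x_new = x.copy()
        x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
        y1 = toy_model(x_new)
        ee[i].append((y1 - y0) / (x_new[i] - x[i]))
        x, y0 = x_new, y1

# mu* (mean absolute elementary effect) ranks parameter importance;
# parameters with small mu* are candidates to drop before calibration
mu_star = np.array([np.mean(np.abs(e)) for e in ee])
for i, m in enumerate(mu_star):
    print(f"param {i}: mu* = {m:.3f}  ->  {'informative' if m > 0.05 else 'screen out'}")
```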

  13. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  14. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.

  15. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by rates of injection at wells 1 and 3, while rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed Sobol' sensitivity analysis based on surrogates is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individual effects and interactions) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
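    As a sketch of the surrogate-based Sobol' workflow in this record: train a radial basis function surrogate on a modest number of "expensive" runs, then estimate first-order indices from the cheap surrogate with a brute-force double loop. It assumes scipy's RBFInterpolator; the toy remediation function, bounds, and sample sizes are invented, and no Kriging variant is shown.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

def toy_remediation(x):
    """Toy stand-in for the multi-phase flow simulator (not the TCE model)."""
    duration, conc, q1, q2 = x
    return 1.0 - np.exp(-0.02 * duration * (0.5 * q1 + 0.2 * q2) * (1 + 0.1 * conc))

bounds = np.array([[50, 400], [1, 10], [0.1, 2.0], [0.1, 2.0]], dtype=float)
d = len(bounds)

# train the RBF surrogate on a modest number of "expensive" runs
X_train = bounds[:, 0] + rng.random((60, d)) * (bounds[:, 1] - bounds[:, 0])
y_train = np.array([toy_remediation(x) for x in X_train])
surrogate = RBFInterpolator(X_train, y_train)

# first-order Sobol indices from the cheap surrogate: S_i = Var(E[Y|X_i]) / Var(Y)
n_outer, n_inner = 200, 200
X_all = bounds[:, 0] + rng.random((20_000, d)) * (bounds[:, 1] - bounds[:, 0])
var_total = surrogate(X_all).var()
for i in range(d):
    cond_means = []
    for xi in np.linspace(bounds[i, 0], bounds[i, 1], n_outer):
        X = bounds[:, 0] + rng.random((n_inner, d)) * (bounds[:, 1] - bounds[:, 0])
        X[:, i] = xi
        cond_means.append(surrogate(X).mean())
    print(f"first-order index for variable {i}: {np.var(cond_means) / var_total:.2f}")
```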

  16. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the Power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.

  17. Social Regulation of Leukocyte Homeostasis: The Role of Glucocorticoid Sensitivity

    PubMed Central

    Cole, Steve W.

    2010-01-01

    Recent small-scale genomics analyses suggest that physiologic regulation of pro-inflammatory gene expression by endogenous glucocorticoids may be compromised in individuals who experience chronic social isolation. This could potentially contribute to the elevated prevalence of inflammation-related disease previously observed in social isolates. The present study assessed the relationship between leukocyte distributional sensitivity to glucocorticoid regulation and subjective social isolation in a large population-based sample of older adults. Initial analyses confirmed that circulating neutrophil percentages were elevated, and circulating lymphocyte and monocyte percentages were suppressed, in direct proportion to circulating cortisol levels. However, leukocyte distributional sensitivity to endogenous glucocorticoids was abrogated in individuals reporting either occasional or frequent experiences of subjective social isolation. This finding held in both nonparametric univariate analyses and in multivariate linear models controlling for a variety of biological, social, behavioral, and psychological confounders. The present results suggest that social factors may alter immune cell sensitivity to physiologic regulation by the hypothalamic-pituitary-adrenal axis in ways that could ultimately contribute to the increased physical health risks associated with social isolation. PMID:18394861

  18. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for total system performance assessment (TSPA) type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 11.1. All the TSPA-LA uncertainty and sensitivity analyses modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling cases output generated in FY15 based on GoldSim Version 9.60.300 documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  20. Nonlinear analyses and failure patterns of typical masonry school buildings in the epicentral zone of the 2016 Italian earthquakes

    NASA Astrophysics Data System (ADS)

    Clementi, Cristhian; Clementi, Francesco; Lenci, Stefano

    2017-11-01

    The paper discusses the behavior of typical masonry school buildings in the center of Italy built at the end of the 1950s without any seismic guidelines. These structures faced the recent Italian earthquakes in 2016 without widespread damage. Global numerical models of the building have been built, and the masonry material has been simulated as nonlinear. Sensitivity analyses are performed to evaluate the reliability of the structural models.

  1. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    PubMed

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factor), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a vital ecosystem service provider - soil loss reduction. Concurrently, progress is already being made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models, such as RUSLE, can help managers quantify the effect of specific land use changes. Moreover, effective land management of vegetation has other important benefits, such as bundled ecosystem services (e.g., pollination and habitat connectivity) and improvements in communities' livelihoods.
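    Because RUSLE is a purely multiplicative model (A = R * K * LS * C * P), a quick way to see why the cover-management factor can dominate is to decompose the variance of log A, which is exactly additive in the log-factors. The factor ranges below are invented for illustration, not the study's US or Costa Rica data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# assumed (hypothetical) factor ranges; RUSLE soil loss A = R * K * LS * C * P
factors = {
    "R": rng.uniform(2000, 9000, n),   # rainfall-runoff erosivity
    "K": rng.uniform(0.01, 0.06, n),   # soil erodibility
    "LS": rng.uniform(0.5, 8.0, n),    # slope length and steepness
    "C": rng.uniform(0.001, 0.5, n),   # cover management
    "P": rng.uniform(0.2, 1.0, n),     # support practices
}
A = np.prod(np.column_stack(list(factors.values())), axis=1)

# because the model is multiplicative, log A is additive in the log-factors,
# so each factor's share of Var(log A) is a simple first-order sensitivity
log_var = {k: np.var(np.log(v)) for k, v in factors.items()}
total = sum(log_var.values())
for k, v in sorted(log_var.items(), key=lambda kv: -kv[1]):
    print(f"{k:3s} share of Var(log A): {v / total:5.1%}")
```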

  3. Use of simple models to determine wake vortex categories for new aircraft.

    DOT National Transportation Integrated Search

    2015-06-22

    The paper describes how to use simple models and, if needed, sensitivity analyses to determine the wake vortex categories for new aircraft. The methodology provides a tool for the regulators to assess the relative risk of introducing new aircraft int...

  4. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.

  5. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k-eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k-eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  6. Genome-wide bisulfite sensitivity profiling of yeast suggests bisulfite inhibits transcription.

    PubMed

    Segovia, Romulo; Mathew, Veena; Tam, Annie S; Stirling, Peter C

    2017-09-01

    Bisulfite, in the form of sodium bisulfite or metabisulfite, is used commercially as a food preservative. Bisulfite is used in the laboratory as a single-stranded DNA mutagen in epigenomic analyses of DNA methylation. Recently it has also been used on whole yeast cells to induce mutations in exposed single-stranded regions in vivo. To understand the effects of bisulfite on live cells, we conducted a genome-wide screen for bisulfite-sensitive mutants in yeast. By screening the deletion mutant array and collections of essential gene mutants, we define a genetic network of bisulfite-sensitive mutants. Validation of screen hits revealed hypersensitivity of transcription and RNA processing mutants rather than of DNA repair pathways, and follow-up analyses support a role in perturbation of RNA transactions. We propose a model in which bisulfite-modified nucleotides may interfere with transcription or RNA metabolism when used in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  8. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro

  9. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives: To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods: We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results: The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions: Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
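    For contrast with the joint bivariate/HSROC approach described above, the sketch below does only the simpler thing: univariate DerSimonian-Laird random-effects pooling of logit sensitivity and logit specificity, ignoring their correlation and any threshold effect. The 2x2 counts are hypothetical, and a full bivariate fit would normally use a generalized linear mixed model in dedicated software.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of effect estimates y with within-study variances v."""
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), tau2

# hypothetical 2x2 counts from studies of one screening questionnaire
tp = np.array([30, 55, 12, 48]); fn = np.array([8, 14, 5, 11])
tn = np.array([90, 120, 60, 150]); fp = np.array([20, 33, 9, 31])

logit_sens = np.log((tp + 0.5) / (fn + 0.5)); v_sens = 1/(tp + 0.5) + 1/(fn + 0.5)
logit_spec = np.log((tn + 0.5) / (fp + 0.5)); v_spec = 1/(tn + 0.5) + 1/(fp + 0.5)

pooled_sens, _ = dersimonian_laird(logit_sens, v_sens)
pooled_spec, _ = dersimonian_laird(logit_spec, v_spec)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))
print(f"summary sensitivity ~ {expit(pooled_sens):.2f}, specificity ~ {expit(pooled_spec):.2f}")
```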

  10. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed by using Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimated value lines.
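    A classical (maximum likelihood) counterpart to the Bayesian competing-risks analysis in this record can be sketched in a few lines: simulate two independent Weibull failure causes, observe only the first failure, and fit the cause-specific Weibull by maximizing a censored likelihood in which the competing cause acts as right-censoring. The shape and scale values and the sample size below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 500

# simulate two independent Weibull failure causes; observe the first to occur
shape_a, scale_a = 1.8, 120.0     # assumed "true" values, cause A
shape_b, scale_b = 0.9, 300.0     # assumed "true" values, cause B
t_a = scale_a * rng.weibull(shape_a, n)
t_b = scale_b * rng.weibull(shape_b, n)
t_obs = np.minimum(t_a, t_b)
cause_a = t_a <= t_b              # True if cause A produced the failure

def neg_loglik(params, t, event):
    """Censored Weibull log-likelihood for one cause; the competing cause censors it."""
    shape, scale = np.exp(params)            # enforce positivity
    z = t / scale
    log_pdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
    log_surv = -z ** shape
    return -np.sum(np.where(event, log_pdf, log_surv))

fit_a = minimize(neg_loglik, x0=np.log([1.0, np.median(t_obs)]),
                 args=(t_obs, cause_a), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit_a.x)
print(f"cause A: shape ~ {shape_hat:.2f} (true {shape_a}), scale ~ {scale_hat:.1f} (true {scale_a})")
```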

  11. The march from early life food sensitization to allergic disease: a systematic review and meta-analyses of birth cohort studies.

    PubMed

    Alduraywish, S A; Lodge, C J; Campbell, B; Allen, K J; Erbas, B; Lowe, A J; Dharmage, S C

    2016-01-01

    There is growing evidence for an increase in food allergies. The question of whether early life food sensitization, a primary step in food allergies, leads to other allergic disease is a controversial but important issue. Birth cohorts are an ideal design to answer this question. We aimed to systematically investigate and meta-analyse the evidence for associations between early food sensitization and allergic disease in birth cohorts. MEDLINE and SCOPUS databases were searched for birth cohorts that have investigated the association between food sensitization in the first 2 years and subsequent wheeze/asthma, eczema and/or allergic rhinitis. We performed meta-analyses using random-effects models to obtain pooled estimates, stratified by age group. The search yielded fifteen original articles representing thirteen cohorts. Early life food sensitization was associated with an increased risk of infantile eczema, childhood wheeze/asthma, eczema and allergic rhinitis and young adult asthma. Meta-analyses demonstrated that early life food sensitization is related to an increased risk of wheeze/asthma (pooled OR 2.9; 95% CI 2.0-4.0), eczema (pooled OR 2.7; 95% CI 1.7-4.4) and allergic rhinitis (pooled OR 3.1; 95% CI 1.9-4.9) from 4 to 8 years. Food sensitization in the first 2 years of life can identify children at high risk of subsequent allergic disease who may benefit from early life preventive strategies. However, due to potential residual confounding in the majority of studies combined with lack of follow-up into adolescence and adulthood, further research is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...

  13. A PROBABILISTIC ARSENIC EXPOSURE ASSESSMENT FOR CHILDREN WHO CONTACT CHROMATED COPPER ARSENATE (CCA)-TREATED PLAYSETS AND DECKS: PART 2 SENSITIVITY AND UNCERTAINTY ANALYSIS

    EPA Science Inventory

    A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two part paper. This Part 2 paper discusses sensitivity and uncertainty analyses conducted to assess the key m...

  14. Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the proportion of higher-sensitivity vehicles present on the road. To investigate the propagating behavior of density waves, nonlinear analysis is performed, and near the critical point the kink-antikink soliton solution is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability rises with this fraction parameter. The theoretical findings are verified via direct numerical simulation.

  15. VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry

    PubMed Central

    Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.

    2015-01-01

    Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002

  16. Development of an estimation model for the evaluation of the energy requirement of dilute acid pretreatments of biomass.

    PubMed

    Mafe, Oluwakemi A T; Davies, Scott M; Hancock, John; Du, Chenyu

    2015-01-01

    This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second-generation ethanol. A dilute acid pretreatment process reported by the National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change in internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system, based on a number of simplifying assumptions. Sensitivity analyses were performed on the solids loading rate, temperature, acid concentration and water evaporation rate. These analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Applying the model to other dilute acid pretreatment processes reported in the literature showed that, although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly.
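
    The energy balance described above (internal energy change plus reaction energy, heat losses and work) can be sketched as a simple function. This is an illustrative sketch only: the heat capacities, loading fractions and helper name are assumptions, not the NREL benchmark figures, and the loop merely illustrates why the solids loading rate dominates the demand.

    ```python
    def pretreatment_energy_demand(mass_solids_kg, solids_loading, t_in_c, t_react_c,
                                   cp_solids=1.8, cp_water=4.18,
                                   q_reaction_kj=0.0, q_loss_kj=0.0, work_kj=0.0):
        """Energy (kJ) to heat a biomass slurry to reaction temperature, plus other terms.

        solids_loading is the solids mass fraction; the water mass follows from it.
        cp values are in kJ/(kg.K); reaction, loss and work terms are supplied directly.
        """
        mass_water_kg = mass_solids_kg * (1.0 - solids_loading) / solids_loading
        dT = t_react_c - t_in_c
        sensible = (mass_solids_kg * cp_solids + mass_water_kg * cp_water) * dT
        return sensible + q_reaction_kj + q_loss_kj + work_kj

    # Hypothetical comparison of two solids loadings for the same amount of biomass
    for loading in (0.10, 0.30):
        e = pretreatment_energy_demand(mass_solids_kg=1000, solids_loading=loading,
                                       t_in_c=25, t_react_c=160)
        print(f"solids loading {loading:.0%}: {e / 1e3:.0f} MJ")
    ```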

  17. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    PubMed

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden of uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, differently performing ANNs may be obtained from the same training data because of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. Moreover, using cross-validation to select the ANN with the highest R2 value can bias the prediction, because R2 cannot determine whether the prediction made by an ANN is biased. R2 also does not indicate whether a model is adequate: it is possible to have a low R2 for a good model and a high R2 for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling a Bayesian framework with model averaging. The uncertainties of the resulting robust prediction are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, with a set of training data used as a test case. This model has been extensively validated against plant and experimental data and is used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
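
    The combination step can be pictured as straightforward model averaging over identically trained networks, with the spread across members standing in for a confidence interval. The sketch below makes that concrete; the prediction arrays are placeholders for the outputs of real trained ANNs, and no Bayesian weighting is attempted.

    ```python
    import numpy as np

    def ensemble_prediction(member_predictions, alpha=0.05):
        """Average predictions from identically trained ANNs and give a simple interval.

        member_predictions: array of shape (n_networks, n_points), one row per ANN.
        Returns (mean, lower, upper) arrays of length n_points.
        """
        preds = np.asarray(member_predictions, float)
        mean = preds.mean(axis=0)
        # Spread across ensemble members as a crude stand-in for predictive uncertainty
        lower = np.quantile(preds, alpha / 2, axis=0)
        upper = np.quantile(preds, 1 - alpha / 2, axis=0)
        return mean, lower, upper

    # Hypothetical outputs of 10 networks evaluated at 5 inputs
    rng = np.random.default_rng(0)
    fake_preds = 1.0 + 0.1 * rng.standard_normal((10, 5))
    mean, lo, hi = ensemble_prediction(fake_preds)
    print(mean.round(3), lo.round(3), hi.round(3))
    ```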

  18. An economic evaluation based on a randomized placebo-controlled trial of varenicline in smokers with cardiovascular disease: results for Belgium, Spain, Portugal, and Italy.

    PubMed

    Wilson, Koo; Hettle, Robert; Marbaix, Sophie; Diaz Cerezo, Silvia; Ines, Monica; Santoni, Laura; Annemans, Lieven; Prignot, Jacques; Lopez de Sa, Esteban

    2012-10-01

    An estimated 17.2% of patients continue to smoke following a diagnosis of cardiovascular disease (CVD). In cardiovascular patients, smoking cessation has been shown to reduce the risk of mortality by 36% and of myocardial infarction by 32%. The objective of this study was to evaluate the long-term health and economic consequences of smoking cessation in patients with CVD. Results of a randomized clinical trial comparing varenicline plus counselling vs. placebo plus counselling were extrapolated using a Markov model to simulate the lifetime costs and health consequences of smoking cessation in patients with stable CVD. For the base case, we took a payer's perspective including direct costs attributed to the healthcare provider, with cumulative life-years (LYs) and quality-adjusted life-years (QALYs) as outcome measures. Secondary analyses were conducted from a societal perspective, evaluating lost productivity due to premature mortality. Sensitivity and subgroup analyses were also undertaken. Results were analysed for Belgium, Spain, Portugal, and Italy. Varenicline plus counselling was associated with a gain in LYs and QALYs in all countries relative to placebo plus counselling. From a payer's perspective, incremental cost-effectiveness ratios were €6120 (Belgium), €5151 (Spain), €5357 (Portugal), and €5433 (Italy) per QALY gained. From a societal perspective, varenicline in addition to counselling was less costly than placebo and counselling in all cases. Sensitivity analyses showed that outcomes were largely insensitive to model assumptions and to uncertainty in model parameters. Varenicline in addition to counselling is cost-effective compared with placebo and counselling in smokers with CVD.
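
    The headline figures reduce to an incremental cost-effectiveness ratio, i.e. the extra cost divided by the extra QALYs of the new strategy. A minimal sketch, with hypothetical per-patient lifetime values rather than the trial-based figures above:

    ```python
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
        if d_qaly <= 0:
            raise ValueError("new strategy does not add QALYs; an ICER is not informative")
        return d_cost / d_qaly

    # Hypothetical lifetime outcomes per patient (not the country-specific results above)
    value = icer(cost_new=12500, qaly_new=10.25, cost_old=11000, qaly_old=10.0)
    print(f"ICER = {value:.0f} EUR per QALY gained")
    ```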

  19. Development, calibration, and sensitivity analyses of a high-resolution dissolved oxygen mass balance model for the northern Gulf of Mexico

    EPA Science Inventory

    A high-resolution dissolved oxygen mass balance model was developed for the Louisiana coastal shelf in the northern Gulf of Mexico. GoMDOM (Gulf of Mexico Dissolved Oxygen Model) was developed to assist in evaluating the impacts of nutrient loading on hypoxia development and exte...

  20. Southern Forest Resource Assessment Using the Subregional Timber Supply (SRTS) Model

    Treesearch

    Robert C. Abt; Frederick W. Cubbage; Gerardo Pacheco

    2000-01-01

    Most timber supply analyses are focused on broad regions. This paper describes a modeling system that uses a standard empirical framework applied to subregional inventory data in the South. Model results indicate significant within-region variation in supply responses across owners and regions. Projections of southern timber markets indicate that results are sensitive...

  1. UNCERTAINTY AND SENSITIVITY ANALYSES FOR INTEGRATED HUMAN HEALTH AND ECOLOGICAL RISK ASSESSMENT OF HAZARDOUS WASTE DISPOSAL

    EPA Science Inventory

    While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...

  2. EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS

    EPA Science Inventory

    The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) database in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...

  3. Sensitivity analysis of reactive ecological dynamics.

    PubMed

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
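
    Reactivity in this sense (the maximum instantaneous amplification rate of perturbations to a stable equilibrium) is the largest eigenvalue of the symmetric part of the Jacobian. A small sketch, using a hypothetical predator-prey Jacobian rather than the models analysed in the paper:

    ```python
    import numpy as np

    def reactivity(jacobian):
        """Largest eigenvalue of the Hermitian (symmetric) part of the Jacobian."""
        A = np.asarray(jacobian, float)
        H = (A + A.T) / 2.0
        return np.linalg.eigvalsh(H).max()

    # Hypothetical Jacobian of a predator-prey model linearized at a stable equilibrium
    A = np.array([[-0.1, -1.0],
                  [ 0.5, -0.2]])
    print("eigenvalues:", np.linalg.eigvals(A))   # negative real parts -> stable equilibrium
    print("reactivity:", reactivity(A))           # > 0 -> perturbations can be transiently amplified
    ```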

  4. A conflict model for the international hazardous waste disposal dispute.

    PubMed

    Hu, Kaixian; Hipel, Keith W; Fang, Liping

    2009-12-15

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  5. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatakeyama, Hiroto; Wu, Sherry Y.; Lyons, Yasmin A.

    Even though hyperthermia is a promising treatment for cancer, the relationship between specific temperatures and clinical benefits and predictors of sensitivity of cancer to hyperthermia is poorly understood. Ovarian and uterine tumors have diverse hyperthermia sensitivities. Integrative analyses of the specific gene signatures and the differences in response to hyperthermia between hyperthermia-sensitive and -resistant cancer cells identified CTGF as a key regulator of sensitivity. CTGF silencing sensitized resistant cells to hyperthermia. CTGF small interfering RNA (siRNA) treatment also sensitized resistant cancers to localized hyperthermia induced by copper sulfide nanoparticles and near-infrared laser in orthotopic ovarian cancer models. Lastly, CTGF silencing aggravated energy stress induced by hyperthermia and enhanced apoptosis of hyperthermia-resistant cancers.

  7. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  8. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
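
    The semi-analytical idea can be illustrated on a static analogue: differentiate the equilibrium equations analytically and approximate the derivative of the stiffness matrix by finite differences, so that K du/dp = df/dp - (dK/dp) u. The sketch below uses a hypothetical two-degree-of-freedom spring system; the transient case adds mass and damping terms, but the structure of the calculation is the same.

    ```python
    import numpy as np

    def semi_analytical_sensitivity(K_of_p, f, p, dp=1e-6):
        """du/dp for K(p) u = f, with dK/dp approximated by a central finite difference."""
        K = K_of_p(p)
        u = np.linalg.solve(K, f)
        dK_dp = (K_of_p(p + dp) - K_of_p(p - dp)) / (2 * dp)  # finite-difference matrix derivative
        rhs = -dK_dp @ u          # df/dp = 0 here (load independent of the design parameter)
        return np.linalg.solve(K, rhs), u

    # Hypothetical 2-DOF spring system whose first stiffness depends on the design parameter p
    def K_of_p(p):
        return np.array([[p + 2.0, -2.0],
                         [-2.0,     3.0]])

    du_dp, u = semi_analytical_sensitivity(K_of_p, f=np.array([1.0, 0.0]), p=5.0)
    print("displacements:", u, "sensitivities:", du_dp)
    ```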

  9. Cost Effectiveness of Ofatumumab Plus Chlorambucil in First-Line Chronic Lymphocytic Leukaemia in Canada.

    PubMed

    Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha

    2016-01-01

    Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.
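
    The probability of cost effectiveness quoted above is simply the share of probabilistic sensitivity analysis draws with positive incremental net monetary benefit at the willingness-to-pay threshold. A sketch with simulated draws, centred for illustration on the base-case increments reported above; the spread parameters are arbitrary, so the printed probability is not the 59% reported in the study.

    ```python
    import numpy as np

    def prob_cost_effective(delta_cost, delta_qaly, wtp):
        """Share of PSA iterations with positive incremental net monetary benefit."""
        nmb = wtp * np.asarray(delta_qaly) - np.asarray(delta_cost)
        return float(np.mean(nmb > 0))

    # Hypothetical PSA draws of incremental costs and QALYs (e.g. from a Markov model)
    rng = np.random.default_rng(1)
    d_cost = rng.normal(27866, 8000, size=10_000)
    d_qaly = rng.normal(0.41, 0.15, size=10_000)
    print("P(cost-effective at $100k/QALY):", prob_cost_effective(d_cost, d_qaly, wtp=100_000))
    ```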

  10. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    PubMed

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.

  11. Use of multi-criteria decision analysis to identify potentially dangerous glacial lakes.

    PubMed

    Kougkoulos, Ioannis; Cook, Simon J; Jomelli, Vincent; Clarke, Leon; Symeonakis, Elias; Dortch, Jason M; Edwards, Laura A; Merad, Myriam

    2018-04-15

    Glacial Lake Outburst Floods (GLOFs) represent a significant threat in deglaciating environments, necessitating the development of GLOF hazard and risk assessment procedures. Here, we outline a Multi-Criteria Decision Analysis (MCDA) approach that can be used to rapidly identify potentially dangerous lakes in regions without existing tailored GLOF risk assessments, where a range of glacial lake types exist, and where field data are sparse or non-existent. Our MCDA model (1) is desk-based and uses freely and widely available data inputs and software, and (2) allows the relative risk posed by a range of glacial lake types to be assessed simultaneously within any region. A review of the factors that influence GLOF risk, combined with the strict rules of criteria selection inherent to MCDA, has allowed us to identify 13 exhaustive, non-redundant, and consistent risk criteria. We use our MCDA model to assess the risk of 16 extant glacial lakes and 6 lakes that have already generated GLOFs, and found that our results agree well with previous studies. For the first time in GLOF risk assessment, we employed sensitivity analyses to test the strength of our model results and assumptions, and to identify lakes that are sensitive to the criteria and risk thresholds used. A key benefit of the MCDA method is that sensitivity analyses are readily undertaken. Overall, these sensitivity analyses lend support to our model, although we suggest that further work is required to determine the relative importance of assessment criteria, and the thresholds that determine the level of risk for each criterion. As a case study, the tested method was then applied to 25 potentially dangerous lakes in the Bolivian Andes, where GLOF risk is poorly understood; 3 lakes are found to pose 'medium' or 'high' risk, and require further detailed investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
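
    The scoring step in such an MCDA model is a weighted sum of criterion scores, and the sensitivity check amounts to re-scoring under perturbed weights. The sketch below is generic and hypothetical: four made-up criteria rather than the 13 criteria of the actual model, and a simple +/-20% weight perturbation.

    ```python
    import numpy as np

    def mcda_score(scores, weights):
        """Weighted-sum MCDA score for one lake; weights are normalized to sum to 1."""
        w = np.asarray(weights, float)
        return float(np.dot(np.asarray(scores, float), w / w.sum()))

    def weight_sensitivity(scores, weights, perturb=0.2):
        """Range of the total score when each weight is varied by +/- perturb in turn."""
        lo, hi = np.inf, -np.inf
        for i in range(len(weights)):
            for factor in (1 - perturb, 1 + perturb):
                w = np.array(weights, float)
                w[i] *= factor
                s = mcda_score(scores, w)
                lo, hi = min(lo, s), max(hi, s)
        return lo, hi

    # Hypothetical lake scored on four risk criteria (0 = low risk, 1 = high risk)
    scores, weights = [0.8, 0.3, 0.6, 0.9], [0.4, 0.2, 0.2, 0.2]
    print("score:", round(mcda_score(scores, weights), 3),
          "range under +/-20% weight changes:", weight_sensitivity(scores, weights))
    ```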

  12. The Impact of Rainfall on Soil Moisture Dynamics in a Foggy Desert.

    PubMed

    Li, Bonan; Wang, Lixin; Kaseke, Kudzai F; Li, Lin; Seely, Mary K

    2016-01-01

    Soil moisture is a key variable in dryland ecosystems since it determines the occurrence and duration of vegetation water stress and affects the development of weather patterns including rainfall. However, the lack of ground observations of soil moisture and rainfall dynamics in many drylands has long been a major obstacle in understanding ecohydrological processes in these ecosystems. It is also uncertain to what extent rainfall controls soil moisture dynamics in fog dominated dryland systems. To this end, in this study, twelve to nineteen months' continuous daily records of rainfall and soil moisture (from January 2014 to August 2015) obtained from three sites (one sand dune site and two gravel plain sites) in the Namib Desert are reported. A process-based model simulating the stochastic soil moisture dynamics in water-limited systems was used to study the relationships between soil moisture and rainfall dynamics. Model sensitivity in response to different soil and vegetation parameters under diverse soil textures was also investigated. Our field observations showed that surface soil moisture dynamics generally follow rainfall patterns at the two gravel plain sites, whereas soil moisture dynamics in the sand dune site did not show a significant relationship with rainfall pattern. The modeling results suggested that most of the soil moisture dynamics can be simulated except the daily fluctuations, which may require a modification of the model structure to include non-rainfall components. Sensitivity analyses suggested that soil hygroscopic point (sh) and field capacity (sfc) were two main parameters controlling soil moisture output, though permanent wilting point (sw) was also very sensitive under the parameter setting of sand dune (Gobabeb) and gravel plain (Kleinberg). Overall, the modeling results were not sensitive to the parameters in non-bounded group (e.g., soil hydraulic conductivity (Ks) and soil porosity (n)). Field observations, stochastic modeling results as well as sensitivity analyses provide soil moisture baseline information for future monitoring and the prediction of soil moisture patterns in the Namib Desert.

  13. The Impact of Rainfall on Soil Moisture Dynamics in a Foggy Desert

    PubMed Central

    Li, Bonan; Wang, Lixin; Kaseke, Kudzai F.; Li, Lin; Seely, Mary K.

    2016-01-01

    Soil moisture is a key variable in dryland ecosystems since it determines the occurrence and duration of vegetation water stress and affects the development of weather patterns including rainfall. However, the lack of ground observations of soil moisture and rainfall dynamics in many drylands has long been a major obstacle in understanding ecohydrological processes in these ecosystems. It is also uncertain to what extent rainfall controls soil moisture dynamics in fog dominated dryland systems. To this end, in this study, twelve to nineteen months’ continuous daily records of rainfall and soil moisture (from January 2014 to August 2015) obtained from three sites (one sand dune site and two gravel plain sites) in the Namib Desert are reported. A process-based model simulating the stochastic soil moisture dynamics in water-limited systems was used to study the relationships between soil moisture and rainfall dynamics. Model sensitivity in response to different soil and vegetation parameters under diverse soil textures was also investigated. Our field observations showed that surface soil moisture dynamics generally follow rainfall patterns at the two gravel plain sites, whereas soil moisture dynamics in the sand dune site did not show a significant relationship with rainfall pattern. The modeling results suggested that most of the soil moisture dynamics can be simulated except the daily fluctuations, which may require a modification of the model structure to include non-rainfall components. Sensitivity analyses suggested that soil hygroscopic point (sh) and field capacity (sfc) were two main parameters controlling soil moisture output, though permanent wilting point (sw) was also very sensitive under the parameter setting of sand dune (Gobabeb) and gravel plain (Kleinberg). Overall, the modeling results were not sensitive to the parameters in non-bounded group (e.g., soil hydraulic conductivity (Ks) and soil porosity (n)). Field observations, stochastic modeling results as well as sensitivity analyses provide soil moisture baseline information for future monitoring and the prediction of soil moisture patterns in the Namib Desert. PMID:27764203

  14. Role of CTGF in sensitivity to hyperthermia in ovarian and uterine cancers

    DOE PAGES

    Hatakeyama, Hiroto; Wu, Sherry Y.; Lyons, Yasmin A.; ...

    2016-11-01

    Even though hyperthermia is a promising treatment for cancer, the relationship between specific temperatures and clinical benefits and predictors of sensitivity of cancer to hyperthermia is poorly understood. Ovarian and uterine tumors have diverse hyperthermia sensitivities. Integrative analyses of the specific gene signatures and the differences in response to hyperthermia between hyperthermia-sensitive and -resistant cancer cells identified CTGF as a key regulator of sensitivity. CTGF silencing sensitized resistant cells to hyperthermia. CTGF small interfering RNA (siRNA) treatment also sensitized resistant cancers to localized hyperthermia induced by copper sulfide nanoparticles and near-infrared laser in orthotopic ovarian cancer models. Lastly, CTGF silencing aggravated energy stress induced by hyperthermia and enhanced apoptosis of hyperthermia-resistant cancers.

  15. Modeling the atmospheric chemistry of TICs

    NASA Astrophysics Data System (ADS)

    Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John

    2009-05-01

    An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
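
    The effective-rate idea replaces the detailed mechanism with a first-order loss term whose rate depends on meteorology. The functional form below is purely illustrative (the fitted keff functions are not given in the abstract); it only shows how a photolysis-like term attenuated by cloud cover and an Arrhenius-like thermal term might be combined and applied as first-order decay.

    ```python
    import math

    def k_eff(solar_flux, temp_k, cloud_fraction,
              a=1e-6, b=1e-6, activation_k=3000.0):
        """Illustrative effective degradation rate (1/s) as a function of meteorology."""
        photolysis = a * solar_flux * (1.0 - 0.7 * cloud_fraction)   # attenuated by cloud cover
        thermal = b * math.exp(-activation_k / temp_k)               # Arrhenius-like term
        return photolysis + thermal

    def concentration(c0, k, t_seconds):
        """First-order decay of a TIC concentration over time t."""
        return c0 * math.exp(-k * t_seconds)

    k = k_eff(solar_flux=500.0, temp_k=298.0, cloud_fraction=0.3)
    print(f"k_eff = {k:.2e} 1/s, relative concentration after 1 h: {concentration(1.0, k, 3600):.3f}")
    ```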

  16. CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations

    DOE Data Explorer

    Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita

    2017-07-20

    This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley: sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) Forward simulation runs of standard cases (push & pull phases), (2) Local sensitivity analyses (push & pull phases), and (3) Data-worth analysis (push & pull phases).

  17. Cognitive and Neural Bases of Skilled Performance.

    DTIC Science & Technology

    1987-10-04

    advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic... and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing... recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive...

  18. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.

  19. Shortwave forcing and feedbacks in Last Glacial Maximum and Mid-Holocene PMIP3 simulations.

    PubMed

    Braconnot, Pascale; Kageyama, Masa

    2015-11-13

    Simulations of the climates of the Last Glacial Maximum (LGM), 21 000 years ago, and of the Mid-Holocene (MH), 6000 years ago, allow an analysis of climate feedbacks in climate states that are radically different from today. The analyses of cloud and surface albedo feedbacks show that the shortwave cloud feedback is a major driver of differences between model results. Similar behaviours appear when comparing the LGM and MH simulated changes, highlighting the fingerprint of model physics. Even though the different feedbacks show similarities between the different climate periods, the fact that their relative strength differs from one climate to the other prevents a direct comparison of past and future climate sensitivity. The land-surface feedback also shows large disparities among models even though they all produce positive sea-ice and snow feedbacks. Models have very different sensitivities when considering the vegetation feedback. This feedback has a regional pattern that differs significantly between models and depends on their level of complexity and model biases. Analyses of the MH climate in two versions of the IPSL model provide further indication on the possibilities to assess the role of model biases and model physics on simulated climate changes using past climates for which observations can be used to assess the model results. © 2015 The Author(s).

  20. Climate sensitivity to the lower stratospheric ozone variations

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    2012-12-01

    The strong sensitivity of the Earth's radiation balance to variations in the lower stratospheric ozone—reported previously—is analysed here by the use of non-linear statistical methods. Our non-linear model of the land air temperature (T)—driven by the measured Arosa total ozone (TOZ)—explains 75% of total variability of Earth's T variations during the period 1926-2011. We have analysed also the factors which could influence the TOZ variability and found that the strongest impact belongs to the multi-decadal variations of galactic cosmic rays. Constructing a statistical model of the ozone variability, we have been able to predict the tendency in the land air T evolution till the end of the current decade. Results show that Earth is facing a weak cooling of the surface T by 0.05-0.25 K (depending on the ozone model) until the end of the current solar cycle. A new mechanism for O3 influence on climate is proposed.

  1. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of a mathematical model's predictions is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food and is simulated with COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the microwave power level process shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have significant effects on the process. In contrast, the surface mass and heat transfer coefficients, the relative and intrinsic permeabilities of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change and only becomes influential when changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve, whereas the water saturation at the medium surface and in the centre behaves differently. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
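
    The one-at-a-time perturbation used here is easy to sketch: vary each parameter by +/-20% around its base value and record the relative change in an output of interest. In the sketch below the drying-time function is a made-up surrogate, not the multiphase COMSOL model.

    ```python
    def oat_sensitivity(model, base_params, frac=0.20):
        """Relative output change when each parameter is perturbed by +/- frac, one at a time."""
        y0 = model(**base_params)
        results = {}
        for name, value in base_params.items():
            changes = []
            for sign in (-1, +1):
                p = dict(base_params)
                p[name] = value * (1 + sign * frac)
                changes.append((model(**p) - y0) / y0)
            results[name] = changes
        return results

    # Hypothetical drying-time surrogate: slower with more mass, faster with more power/diffusivity
    def drying_time(mass, power, diffusivity):
        return mass / (0.8 * power * diffusivity)

    base = {"mass": 1.0, "power": 2.0, "diffusivity": 0.5}
    for name, (down, up) in oat_sensitivity(drying_time, base).items():
        print(f"{name}: -20% -> {down:+.1%}, +20% -> {up:+.1%}")
    ```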

  2. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
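
    Metabolome-wide significance control of this kind is typically a Benjamini-Hochberg false discovery rate step applied to the per-metabolite association p values. A generic sketch (not the authors' exact pipeline) follows.

    ```python
    import numpy as np

    def benjamini_hochberg(p_values, fdr=0.05):
        """Return a boolean array marking which p values are significant at the given FDR."""
        p = np.asarray(p_values, float)
        m = len(p)
        order = np.argsort(p)
        thresholds = fdr * np.arange(1, m + 1) / m        # BH step-up thresholds
        passing = p[order] <= thresholds
        significant = np.zeros(m, dtype=bool)
        if passing.any():
            k = np.nonzero(passing)[0].max()              # largest rank meeting its threshold
            significant[order[:k + 1]] = True
        return significant

    # Hypothetical association p values for a handful of metabolites
    p_vals = [0.0004, 0.012, 0.03, 0.20, 0.64, 0.0007]
    print(benjamini_hochberg(p_vals, fdr=0.05))
    ```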

  3. Modeling Antimicrobial Activity of Clorox(R) Using an Agar-Diffusion Test: A New Twist On an Old Experiment.

    ERIC Educational Resources Information Center

    Mitchell, James K.; Carter, William E.

    2000-01-01

    Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox®) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…

  4. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: the air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
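
    The elementary effect screening referred to above perturbs one parameter at a time along random trajectories through the parameter space and summarizes the mean absolute effect (mu*) and its standard deviation for each parameter. A compact, generic sketch with a made-up three-parameter test function, not the HiLAT configuration:

    ```python
    import numpy as np

    def elementary_effects(model, bounds, n_trajectories=20, delta=0.1, seed=0):
        """Morris-style screening: mean absolute EE (mu*) and EE standard deviation per parameter."""
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, float)          # shape (k, 2): lower/upper bound per parameter
        k = len(bounds)
        effects = [[] for _ in range(k)]
        for _ in range(n_trajectories):
            # Random start, leaving room for one positive step of size delta * range
            x = rng.uniform(bounds[:, 0], bounds[:, 1] - delta * (bounds[:, 1] - bounds[:, 0]))
            y = model(x)
            for i in rng.permutation(k):            # move one parameter at a time
                x_new = x.copy()
                x_new[i] += delta * (bounds[i, 1] - bounds[i, 0])
                y_new = model(x_new)
                effects[i].append((y_new - y) / delta)
                x, y = x_new, y_new
        mu_star = [np.mean(np.abs(e)) for e in effects]
        sigma = [np.std(e) for e in effects]
        return mu_star, sigma

    # Hypothetical 3-parameter test model with an interaction term
    model = lambda x: x[0] + 2.0 * x[1] ** 2 + 0.5 * x[0] * x[2]
    mu_star, sigma = elementary_effects(model, bounds=[(0, 1), (0, 1), (0, 1)])
    print("mu*:", np.round(mu_star, 2), "sigma:", np.round(sigma, 2))
    ```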

  5. Panic and phobic anxiety: associations among neuroticism, physiological hyperarousal, anxiety sensitivity, and three phobias.

    PubMed

    Longley, Susan L; Watson, David; Noyes, Russell; Yoder, Kevin

    2006-01-01

    A dimensional and psychometrically informed taxonomy of anxiety is emerging, but the specific and nonspecific dimensions of panic and phobic anxiety require greater clarification. In this study, confirmatory factor analyses of data from a sample of 438 college students were used to validate a model of panic and phobic anxiety with six content factors; multiple scales from self-report measures were indicators of each model component. The model included a nonspecific component of (1) neuroticism and two specific components of panic attack, (2) physiological hyperarousal, and (3) anxiety sensitivity. The model also included three phobia components of (4) classically defined agoraphobia, (5) social phobia, and (6) blood-injection phobia. In these data, agoraphobia correlated more strongly with both the social phobia and blood phobia components than with either the physiological hyperarousal or the anxiety sensitivity components. These findings suggest that the association between panic attacks and agoraphobia warrants greater attention.

  6. A dynamic growth model of vegetative soya bean plants: model structure and behaviour under varying root temperature and nitrogen concentration

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Wilkerson, G. G.; Raper, C. D. Jr; Gold, H. J.

    1990-01-01

    A differential equation model of vegetative growth of the soya bean plant (Glycine max (L.) Merrill cv. 'Ransom') was developed to account for plant growth in a phytotron system under variation of root temperature and nitrogen concentration in nutrient solution. The model was tested by comparing model outputs with data from four different experiments. Model predictions agreed fairly well with measured plant performance over a wide range of root temperatures and over a range of nitrogen concentrations in nutrient solution between 0.5 and 10.0 mmol NO3- in the phytotron environment. Sensitivity analyses revealed that the model was most sensitive to changes in parameters relating to carbohydrate concentration in the plant and nitrogen uptake rate.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yongxi

    We propose an integrated modeling framework to optimally locate wireless charging facilities along a highway corridor to provide sufficient in-motion charging. The integrated model consists of a master Infrastructure Planning Model that determines the best locations, together with two sub-models that explicitly capture energy consumption and charging and the interactions among electric vehicle and wireless charging technologies, highway corridor geometrics, speed, and auxiliary systems. The model is implemented in an illustrative case study of a corridor of Interstate 5 in Oregon. We found that the cost of establishing the charging lane is sensitive to, and increases with, the speed to be achieved. Through sensitivity analyses, we gain a better understanding of the extent to which the geometric characteristics of highways and battery capacity affect the charging lane design.

  8. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    Events of high ozone concentrations and the associated meteorological conditions over the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes, in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering daytime and nighttime ozone concentrations in the lateral boundary conditions of the model domain. Sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for reductions below 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological analysis shows that warm temperature, sufficient sunlight, low wind, and high surface pressure tend to trigger ozone episodes in polluted urban areas such as Kaohsiung.

  9. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447

  10. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    PubMed

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.

  11. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
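
    The negative exponential model expresses a former smoker's excess risk as a fraction exp(-ln 2 * t / H) of a continuing smoker's excess risk, so H is the time for the excess to halve. A small sketch evaluating the relative risk by time quit; the current-smoker RR of 20 is a hypothetical input, while the half-life uses the pooled estimate reported above.

    ```python
    import math

    def rr_former_smoker(rr_current, years_quit, half_life):
        """Negative exponential decline: the excess risk halves every `half_life` years."""
        excess = (rr_current - 1.0) * math.exp(-math.log(2.0) * years_quit / half_life)
        return 1.0 + excess

    # Hypothetical current-smoker RR of 20, half-life from the pooled estimate (~9.93 years)
    for t in (0, 5, 10, 20, 40):
        print(f"{t:>2} years quit: RR = {rr_former_smoker(20.0, t, half_life=9.93):.1f}")
    ```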

  12. Automated haematology analysis to diagnose malaria

    PubMed Central

    2010-01-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for an adequate treatment and to minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and clinical context where the method is applied. Future developments in new haematology analysers such as considerably simplified, robust and inexpensive devices for malaria detection fitted with an automatically generated alert could improve the detection capacity of these instruments and potentially expand their clinical utility in malaria diagnosis. PMID:21118557

  13. "A Bayesian sensitivity analysis to evaluate the impact of unmeasured confounding with external data: a real world comparative effectiveness study in osteoporosis".

    PubMed

    Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W

    2016-09-01

    Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies. In a real world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.
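
    The sketch below is not the authors' twin-regression model; it is a simpler Monte Carlo adjustment for a binary unmeasured confounder (low baseline BMD) using a standard bias-factor formula, illustrating how external information can be propagated into the treatment-effect estimate. The observed odds ratio, prevalence distributions, and confounder-outcome risk ratio are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        or_observed = 1.36          # hypothetical unadjusted odds ratio (fracture at 24 months)
        n_draws = 10_000

        # External information on low baseline BMD (all distributions are hypothetical).
        p_treated = rng.beta(30, 20, n_draws)               # prevalence of low BMD, treated group
        p_comparator = rng.beta(12, 28, n_draws)            # prevalence of low BMD, comparator group
        rr_bmd = rng.lognormal(np.log(2.5), 0.2, n_draws)   # low BMD -> fracture risk ratio

        # Simple bias factor for a binary unmeasured confounder.
        bias = (p_treated * (rr_bmd - 1) + 1) / (p_comparator * (rr_bmd - 1) + 1)
        or_adjusted = or_observed / bias
        lo, med, hi = np.percentile(or_adjusted, [2.5, 50, 97.5])
        print(f"Adjusted OR: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")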

  14. Validity of self-reported stroke in elderly African Americans, Caribbean Hispanics, and Whites.

    PubMed

    Reitz, Christiane; Schupf, Nicole; Luchsinger, José A; Brickman, Adam M; Manly, Jennifer J; Andrews, Howard; Tang, Ming X; DeCarli, Charles; Brown, Truman R; Mayeux, Richard

    2009-07-01

    The validity of a self-reported stroke remains inconclusive. To validate the diagnosis of self-reported stroke using stroke identified by magnetic resonance imaging (MRI) as the standard. Community-based cohort study of nondemented, ethnically diverse elderly persons in northern Manhattan. High-resolution quantitative MRIs were acquired for 717 participants without dementia. Sensitivity and specificity of stroke by self-report were examined using cross-sectional analyses and the chi-square test. Relationships between the reporting of stroke and factors potentially influencing it, including memory performance, cognitive function, and vascular risk factors, were assessed using logistic regression models. Subsequently, all analyses were repeated, stratified by age, sex, ethnic group, and level of education. In analyses of the whole sample, sensitivity of stroke self-report for a diagnosis of stroke on MRI was 32.4%, and specificity was 78.9%. In analyses stratified by median age (80.1 years), the validity between reported stroke and detection of stroke on MRI was significantly better in the younger than the older age group (for all vascular territories: sensitivity and specificity, 36.7% and 81.3% vs 27.6% and 26.2%; P = .02). Impaired memory, cognitive skills, or language ability and the presence of hypertension or myocardial infarction were associated with higher rates of false-negative results. Using brain MRI as the standard, specificity and sensitivity of stroke self-report are low. Accuracy of self-report is influenced by age, presence of vascular disease, and cognitive function. In stroke research, sensitive neuroimaging techniques rather than stroke self-report should be used to determine stroke history.
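
    For reference, the sensitivity and specificity reported above would be computed from a two-by-two table of self-report against the MRI standard, as in the minimal sketch below; the counts are hypothetical and merely chosen to be roughly consistent with the reported whole-sample figures.

        # Hypothetical 2x2 table of self-reported stroke vs. stroke on MRI (reference standard).
        tp, fn = 46, 96      # MRI-positive participants: reported / did not report a stroke
        fp, tn = 121, 454    # MRI-negative participants: reported / did not report a stroke

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")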

  15. Between- and within-lake responses of macrophyte richness metrics to shoreline development

    USGS Publications Warehouse

    Beck, Marcus W.; Vondracek, Bruce C.; Hatch, Lorin K.

    2013-01-01

    Aquatic habitat in littoral environments can be affected by residential development of shoreline areas. We evaluated the relationship between macrophyte richness metrics and shoreline development to quantify indicator response at 2 spatial scales for Minnesota lakes. First, the response of total, submersed, and sensitive species to shoreline development was evaluated within lakes to quantify macrophyte response as a function of distance to the nearest dock. Within-lake analyses using generalized linear mixed models focused on 3 lakes of comparable size with a minimal influence of watershed land use. Survey points farther from docks had higher total species richness and presence of species sensitive to disturbance. Second, between-lake effects of shoreline development on total, submersed, emergent-floating, and sensitive species were evaluated for 1444 lakes. Generalized linear models were developed for all lakes and stratified subsets to control for lake depth and watershed land use. Between-lake analyses indicated a clear response of macrophyte richness metrics to increasing shoreline development, such that the numbers of emergent-floating and sensitive species declined with increasing density of docks. These trends were particularly evident for deeper lakes with lower watershed development. Our results provide further evidence that shoreline development is associated with degraded aquatic habitat, particularly by illustrating the response of macrophyte richness metrics across multiple lake types and different spatial scales.

  16. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
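
    A hedged sketch of the variance-based workflow described above (a Sobol'-sequence design, emulator predictions, and Sobol' indices) is given below using the SALib library. The three parameter names, their bounds, and the stand-in emulator function are hypothetical and do not reflect the actual CICE configuration.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy 3-parameter problem standing in for the 39 CICE parameters (names hypothetical).
        problem = {
            "num_vars": 3,
            "names": ["snow_conductivity", "snow_grain_size", "meltpond_drainage"],
            "bounds": [[0.2, 0.5], [50e-6, 200e-6], [0.0, 1.0]],
        }

        param_values = saltelli.sample(problem, 1024)   # Sobol'-sequence based design

        def emulator(x):
            # Stand-in for a trained emulator of, e.g., September sea ice volume.
            return 12.0 - 8.0 * x[:, 0] + 3.0e4 * x[:, 1] - 2.0 * x[:, 2] * x[:, 0]

        Y = emulator(param_values)
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")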

  17. Cost Analyses in the US and Japan: A Cross-Country Comparative Analysis Applied to the PRONOUNCE Trial in Non-Squamous Non-Small Cell Lung Cancer.

    PubMed

    Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher

    2015-12-01

    Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society for Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained based on local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were included from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models demonstrated a similar direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some of the specific categories, such as toxicity, and impacted the sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly impact the results between countries, which were directionally consistent, with greater variation seen in the sensitivity analyses. Factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Eli Lilly and Company.

  18. Sensitivity of physical parameterizations on prediction of tropical cyclone Nargis over the Bay of Bengal using WRF model

    NASA Astrophysics Data System (ADS)

    Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.

    2011-09-01

    Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals, starting from cyclogenesis to near landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Center for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts better track and intensity compared with the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, maximum wind error of 12 m s-1 and track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.

  19. Assessing the dependence of sensitivity and specificity on prevalence in meta-analysis

    PubMed Central

    Li, Jialiang; Fine, Jason P.

    2011-01-01

    We consider modeling the dependence of sensitivity and specificity on the disease prevalence in diagnostic accuracy studies. Many meta-analyses compare test accuracy across studies and fail to incorporate the possible connection between the accuracy measures and the prevalence. We propose a Pearson type correlation coefficient and an estimating equation–based regression framework to help understand such a practical dependence. The results we derive may then be used to better interpret the results from meta-analyses. In the biomedical examples analyzed in this paper, the diagnostic accuracy of biomarkers is shown to be associated with prevalence, providing insights into the utility of these biomarkers in low- and high-prevalence populations. PMID:21525421

  20. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Treesearch

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  1. Variation of a test's sensitivity and specificity with disease prevalence.

    PubMed

    Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M

    2013-08-06

    Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
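
    A minimal sketch of the inverse-variance pooling step mentioned above is given below; the per-review effects of prevalence (here on logit specificity) and their standard errors are hypothetical.

        import numpy as np

        # Hypothetical per-meta-analysis effects of prevalence on logit(specificity),
        # with their standard errors (one entry per review).
        effects = np.array([-0.8, -0.3, 0.1, -1.2, -0.5])
        ses = np.array([0.4, 0.2, 0.3, 0.6, 0.25])

        weights = 1.0 / ses**2                        # inverse-variance weights
        pooled = np.sum(weights * effects) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))
        print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")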

  2. Variation of a test’s sensitivity and specificity with disease prevalence

    PubMed Central

    Leeflang, Mariska M.G.; Rutjes, Anne W.S.; Reitsma, Johannes B.; Hooft, Lotty; Bossuyt, Patrick M.M.

    2013-01-01

    Background: Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. Methods: We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Results: Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. Interpretation: The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation. PMID:23798453

  3. Cost-effectiveness analysis of interferon beta-1b for the treatment of patients with a first clinical event suggestive of multiple sclerosis.

    PubMed

    Caloyeras, John P; Zhang, Bin; Wang, Cheng; Eriksson, Marianne; Fredrikson, Sten; Beckmann, Karola; Knappertz, Volker; Pohl, Christoph; Hartung, Hans-Peter; Shah, Dhvani; Miller, Jeffrey D; Sandbrink, Rupert; Lanius, Vivian; Gondek, Kathleen; Russell, Mason W

    2012-05-01

    To assess, from a Swedish societal perspective, the cost effectiveness of interferon β-1b (IFNB-1b) after an initial clinical event suggestive of multiple sclerosis (MS) (ie, early treatment) compared with treatment after onset of clinically definite MS (CDMS) (ie, delayed treatment). A Markov model was developed, using patient level data from the BENEFIT trial and published literature, to estimate health outcomes and costs associated with IFNB-1b for hypothetical cohorts of patients after an initial clinical event suggestive of MS. Health states were defined by Kurtzke Expanded Disability Status Scale (EDSS) scores. Model outcomes included quality-adjusted life years (QALYs), total costs (including both direct and indirect costs), and incremental cost-effectiveness ratios. Sensitivity analyses were performed on key model parameters to assess the robustness of model results. In the base case scenario, early IFNB-1b treatment was economically dominant (ie, less costly and more effective) versus delayed IFNB-1b treatment when QALYs were used as the effectiveness metric. Sensitivity analyses showed that the cost-effectiveness results were sensitive to model time horizon. Compared with the delayed treatment strategy, early treatment of MS was also associated with delayed EDSS progressions, prolonged time to CDMS diagnosis, and a reduction in frequency of relapse. Early treatment with IFNB-1b for a first clinical event suggestive of MS was found to improve patient outcomes while controlling costs. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.
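
    The sketch below illustrates the general mechanics of a Markov cohort model of the kind described (health states, a transition matrix, and discounted QALYs accumulated over cycles). The states, transition probabilities, utilities, horizon, and discount rate are hypothetical and are not taken from the BENEFIT-based model.

        import numpy as np

        # Hypothetical 3-state Markov cohort model: CIS (after first event), CDMS, Dead.
        P = np.array([[0.85, 0.13, 0.02],      # annual transition probabilities (illustrative)
                      [0.00, 0.95, 0.05],
                      [0.00, 0.00, 1.00]])
        utilities = np.array([0.80, 0.60, 0.0])  # QALY weight per state per year
        discount = 0.03
        cohort = np.array([1.0, 0.0, 0.0])       # everyone starts after a first clinical event

        total_qalys = 0.0
        for year in range(40):                    # 40-year horizon
            total_qalys += np.dot(cohort, utilities) / (1 + discount) ** year
            cohort = cohort @ P                   # advance the cohort one cycle
        print(f"Discounted QALYs per patient: {total_qalys:.2f}")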

  4. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  5. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  6. Sensitivity analysis of the DRAINWAT model applied to an agricultural watershed in the lower coastal plain, North Carolina, USA

    Treesearch

    Hyunwoo Kim; Devendra M. Amatya; Stephen W. Broome; Dean L. Hesterberg; Minha Choi

    2011-01-01

    The DRAINWAT, DRAINmod for WATershed model, was selected for hydrological modelling to obtain water table depths and drainage outflows at Open Grounds Farm in Carteret County, North Carolina, USA. Six simulated storm events from the study period were compared with the measured data and analysed. Simulation results from the whole study period and selected rainfall...

  7. Retrieval of tropospheric carbon monoxide for the MOPITT experiment

    NASA Astrophysics Data System (ADS)

    Pan, Liwen; Gille, John C.; Edwards, David P.; Bailey, Paul L.; Rodgers, Clive D.

    1998-12-01

    A retrieval method for deriving the tropospheric carbon monoxide (CO) profile and column amount under clear sky conditions has been developed for the Measurements of Pollution In The Troposphere (MOPITT) instrument, scheduled for launch in 1998 onboard the EOS-AM1 satellite. This paper presents a description of the method along with analyses of retrieval information content. These analyses characterize the forward measurement sensitivity, the contribution of a priori information, and the retrieval vertical resolution. Ensembles of tropospheric CO profiles were compiled both from aircraft in situ measurements and from chemical model results and were used in retrieval experiments to characterize the method and to study the sensitivity to different parameters. Linear error analyses were carried out in parallel with the ensemble experiments. Results of these experiments and analyses indicate that MOPITT CO column measurements will have better than 10% precision, and CO profile measurement will have approximately three pieces of independent information that will resolve 3-5 tropospheric layers to approximately 10% precision. These analyses are important for understanding MOPITT data, both for application of data in tropospheric chemistry studies and for comparison with in situ measurements.
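
    A hedged sketch of one retrieval-information diagnostic of the kind mentioned above, the averaging kernel and its trace (degrees of freedom for signal) under linear optimal estimation, is shown below; the Jacobian and covariance matrices are synthetic placeholders, not MOPITT quantities.

        import numpy as np

        rng = np.random.default_rng(1)
        n_levels, n_channels = 7, 8                          # coarse CO profile grid, radiometer channels

        K = rng.normal(size=(n_channels, n_levels)) * 0.1    # synthetic weighting-function Jacobian
        S_e = np.diag(np.full(n_channels, 0.02**2))          # measurement-noise covariance
        S_a = np.diag(np.full(n_levels, 0.3**2))             # a priori covariance (~30% variability)

        # Optimal-estimation averaging kernel: A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
        KtSe = K.T @ np.linalg.inv(S_e)
        A = np.linalg.inv(KtSe @ K + np.linalg.inv(S_a)) @ (KtSe @ K)
        print("degrees of freedom for signal:", np.trace(A))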

  8. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet

    PubMed Central

    Camps-Bossacoma, Mariona; Pérez-Cano, Francisco J.; Franch, Àngels

    2017-01-01

    Increasing evidence suggests a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or a 10% cocoa diet. Faecal microbiota was analysed through a metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa. PMID:28239436

  9. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet.

    PubMed

    Camps-Bossacoma, Mariona; Pérez-Cano, Francisco J; Franch, Àngels; Castell, Margarida

    2017-01-01

    Increasing evidence suggests a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or a 10% cocoa diet. Faecal microbiota was analysed through a metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.

  10. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters, by observational studies or by improving existing parameterizations in the sea ice model.
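
    The sketch below illustrates the surrogate-modelling step described above: training and cross-validating a Gaussian-process emulator on a modest set of model runs. The five toy parameters and the synthetic response stand in for the actual CICE parameters and outputs.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.uniform(0.0, 1.0, size=(400, 5))    # 400 runs, 5 toy parameters
        y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + X[:, 2] * X[:, 3] + rng.normal(0, 0.05, 400)  # toy "sea ice extent"

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(5)),
                                      normalize_y=True)
        scores = cross_val_score(gp, X, y, cv=5, scoring="r2")
        print("cross-validated R^2:", round(scores.mean(), 3))
        gp.fit(X, y)                                # emulator then used for the Sobol sampling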

  11. An efficient sensitivity analysis method for modified geometry of Macpherson suspension based on Pearson correlation coefficient

    NASA Astrophysics Data System (ADS)

    Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh

    2017-06-01

    The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol' indices expressed in terms of the Pearson correlation, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with an equivalent set of constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. The estimation of the Pearson correlation coefficient between variables is also analysed in this method. The results indicate that the Pearson correlation coefficient provides an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
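
    A minimal sketch of ranking design variables by the Pearson correlation between sampled parameter values and a response of interest is shown below; the suspension variables, their distributions, and the synthetic response are hypothetical illustrations, not the paper's model.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(3)
        # Synthetic samples of three hypothetical suspension design variables.
        design = {
            "lower_arm_length": rng.normal(0.35, 0.02, 200),
            "strut_inclination": rng.normal(8.0, 0.5, 200),
            "spring_stiffness": rng.normal(25e3, 2e3, 200),
        }
        # Synthetic response: RMS sprung-mass acceleration (illustrative relationship only).
        response = (0.6 + 4.0 * design["lower_arm_length"]
                    - 0.02 * design["strut_inclination"]
                    + 1e-5 * design["spring_stiffness"]
                    + rng.normal(0, 0.02, 200))

        for name, values in design.items():
            r, p = pearsonr(values, response)
            print(f"{name}: r = {r:+.2f} (p = {p:.3f})")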

  12. Tactile sensitivity of gloved hands in the cold operation.

    PubMed

    Geng, Q; Kuklane, K; Holmér, I

    1997-11-01

    In this study, the tactile sensitivity of gloved hands during cold operations was investigated. The relations among the physical properties of protective gloves, hand tactile sensitivity, and cold protection were also analysed, both objectively and subjectively. Subjects wearing various gloves participated in the experimental study during cold exposure at ambient temperatures of -12 degrees C and -25 degrees C. Tactual performance was measured as the percentage of misjudgements in an identification task using objects of various sizes. Forearm, hand and finger skin temperatures were also recorded throughout. The experimental data were analysed using an analysis of variance (ANOVA) model and Tukey's multiple range test. The results indicated that tactual performance was affected both by the gloves and by hand/finger cooling. The effect of object size on tactile discrimination was significant, and misjudgement increased when objects of similar size had to be identified, especially at -25 degrees C.
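
    The analysis pattern described above (a one-way ANOVA followed by Tukey's multiple-comparison test) can be sketched with statsmodels as below; the glove conditions and misjudgement values are synthetic.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(4)
        gloves = np.repeat(["bare", "thin", "thick"], 20)
        # Synthetic percentage of misjudged objects per trial (illustrative only).
        misjudgement = np.concatenate([rng.normal(5, 2, 20),
                                       rng.normal(12, 3, 20),
                                       rng.normal(22, 4, 20)])
        data = pd.DataFrame({"glove": gloves, "misjudgement": misjudgement})

        model = smf.ols("misjudgement ~ C(glove)", data=data).fit()
        print(sm.stats.anova_lm(model, typ=2))                          # one-way ANOVA table
        print(pairwise_tukeyhsd(data["misjudgement"], data["glove"]))   # Tukey's multiple comparison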

  13. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.

  14. Sensitivity of productivity and respiration to water availability determines the net ecosystem exchange of carbon in terrestrial ecosystems of the United States

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Ballantyne, A.; Poulter, B.; Anderegg, W.; Jacobson, A. R.; Miller, J. B.

    2017-12-01

    Interannual variability (IAV) of atmospheric CO2 is primarily driven by fluctuations in net carbon exchange (NEE) by terrestrial ecosystems. Recent analyses suggested that global terrestrial carbon uptake is dominated by the sensitivity of productivity to precipitation in semi-arid ecosystems, or by the sensitivity of respiration to temperature in tropical ecosystems. There is a need to better understand the factors that control the carbon balance of land ecosystems across spatial and temporal scales. Here we used multiple observational datasets to assess: (1) What are the dominant processes controlling the IAV of NEE in terrestrial ecosystems? (2) What are the climatic controls on the variability of gross primary productivity (GPP) and total ecosystem respiration (TER) in the contiguous United States (CONUS)? Our analysis revealed a strong positive correlation between the IAV of GPP and the IAV of NEE in the drier (mean annual precipitation: MAP < 750 mm) western ecosystems, but no correlation between the IAV of GPP and the IAV of NEE in the moister (MAP > 750 mm) eastern ecosystems in the observational datasets. Both the spatial and temporal sensitivities (βspatial and βtemporal) of GPP and TER to precipitation exhibit an emergent threshold, where GPP is more sensitive than TER to precipitation in semi-arid western ecosystems and TER is more sensitive than GPP to precipitation in more humid eastern ecosystems. This emergent ecosystem threshold was evident in several independent observations. However, analyses of 10 TRENDY models indicate that current Dynamic Global Vegetation Models (DGVMs) tend to overestimate the sensitivity of NEE to GPP and underestimate the sensitivity of NEE to TER with respect to precipitation across CONUS ecosystems. Model experiments showed that commonly used TER models failed to capture the IAV of TER in the moist region of CONUS. This is because heterotrophic respiration (Rh) was relatively independent of GPP in moist regions of CONUS, but was too tightly coupled to GPP in the DGVMs. The emergent thresholds at the ecosystem and continental scale may help reconcile model simulations and observations of terrestrial carbon processes.

  15. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters having the most influence facilitates establishing the best values for model parameters, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Using two species, we show that even when upward or downward alterations of parameter values produce a major change in the Ecoclimatic Index, the effect on the species depends on the selection of suitability categories and modelling regions. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had the greater impact on the distributions of both species in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the model is most sensitive.
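
    A minimal sketch of the one-at-a-time perturbation scheme described above is given below: each fitting parameter is varied around its best-fit value and the change in the model output is recorded. The parameter names and the toy response standing in for the CLIMEX Ecoclimatic Index are hypothetical.

        # Best-fit parameter values (names and response function are illustrative stand-ins).
        best_fit = {"DV0": 10.0, "DV1": 24.0, "SM0": 0.25, "TTCS": 8.0}

        def ecoclimatic_index(p):
            # Toy response standing in for the CLIMEX Ecoclimatic Index at one location.
            return 60.0 - 1.5 * p["DV0"] + 0.8 * p["DV1"] - 40.0 * p["SM0"] - 0.5 * p["TTCS"]

        baseline = ecoclimatic_index(best_fit)
        for name in best_fit:
            for factor in (0.9, 1.1):             # perturb one parameter by +/-10% at a time
                perturbed = dict(best_fit, **{name: best_fit[name] * factor})
                delta = ecoclimatic_index(perturbed) - baseline
                print(f"{name} x{factor:.1f}: change in EI = {delta:+.2f}")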

  16. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.

  17. Violent video game effects on aggression, empathy, and prosocial behavior in eastern and western countries: a meta-analytic review.

    PubMed

    Anderson, Craig A; Shibuya, Akiko; Ihori, Nobuko; Swing, Edward L; Bushman, Brad J; Sakamoto, Akira; Rothstein, Hannah R; Saleem, Muniba

    2010-03-01

    Meta-analytic procedures were used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior. Unique features of this meta-analytic review include (a) more restrictive methodological quality inclusion criteria than in past meta-analyses; (b) cross-cultural comparisons; (c) longitudinal studies for all outcomes except physiological arousal; (d) conservative statistical controls; (e) multiple moderator analyses; and (f) sensitivity analyses. Social-cognitive models and cultural differences between Japan and Western countries were used to generate theory-based predictions. Meta-analyses yielded significant effects for all 6 outcome variables. The pattern of results for different outcomes and research designs (experimental, cross-sectional, longitudinal) fit theoretical predictions well. The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior. Moderator analyses revealed significant research design effects, weak evidence of cultural differences in susceptibility and type of measurement effects, and no evidence of sex differences in susceptibility. Results of various sensitivity analyses revealed these effects to be robust, with little evidence of selection (publication) bias.

  18. Health economic evaluation of Human Papillomavirus vaccines in women from Venezuela by a lifetime Markov cohort model.

    PubMed

    Bardach, Ariel Esteban; Garay, Osvaldo Ulises; Calderón, María; Pichón-Riviére, Andrés; Augustovski, Federico; Martí, Sebastián García; Cortiñas, Paula; Gonzalez, Marino; Naranjo, Laura T; Gomez, Jorge Alberto; Caporale, Joaquín Enzo

    2017-02-02

    Cervical cancer (CC) and genital warts (GW) are a significant public health issue in Venezuela. Our objective was to assess the cost-effectiveness of the two available vaccines against Human Papillomavirus (HPV), bivalent and quadrivalent, in Venezuelan girls in order to inform decision-makers. A previously published Markov cohort model, informed by the best available evidence, was adapted to the Venezuelan context to evaluate the effects of vaccination on health and healthcare costs from the perspective of the healthcare payer, in a cohort of 264,489 11-year-old girls. Costs and quality-adjusted life years (QALYs) were discounted at 5%. Eight scenarios were analyzed to depict the cost-effectiveness under alternative vaccine prices, exchange rates and dosing schemes. Deterministic and probabilistic sensitivity analyses were performed. Compared with screening only, the bivalent and quadrivalent vaccines were cost-saving in all scenarios, avoiding 2,310 and 2,143 deaths and 4,781 and 4,431 CC cases, respectively, plus up to 18,459 GW cases for the quadrivalent vaccine, and gaining 4,486 and 4,395 discounted QALYs, respectively. For both vaccines, the main determinants of variation in the incremental cost-effectiveness ratio after running deterministic and probabilistic sensitivity analyses were transition probabilities, vaccine and cancer-treatment costs, and the distribution of HPV 16 and 18 in CC cases. When comparing the vaccines, neither was consistently more cost-effective than the other. In sensitivity analyses, for these comparisons, the main determinants were GW incidence, the level of cross-protection and, for some scenarios, vaccine costs. Immunization with the bivalent or quadrivalent HPV vaccine was shown to be cost-saving or cost-effective in Venezuela, falling below the threshold of one Gross Domestic Product (GDP) per capita (104,404 VEF) per QALY gained. Deterministic and probabilistic sensitivity analyses confirmed the robustness of these results.

  19. Bayesian sensitivity analysis methods to evaluate bias due to misclassification and missing data using informative priors and external validation data.

    PubMed

    Luta, George; Ford, Melissa B; Bondy, Melissa; Shields, Peter G; Stamey, James D

    2013-04-01

    Recent research suggests that the Bayesian paradigm may be useful for modeling biases in epidemiological studies, such as those due to misclassification and missing data. We used Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to the potential effect of these two important sources of bias. We used data from a study of the joint associations of radiotherapy and smoking with primary lung cancer among breast cancer survivors. We used Bayesian methods to provide an operational way to combine both validation data and expert opinion to account for misclassification of the two risk factors and missing data. For comparative purposes we considered a "full model" that allowed for both misclassification and missing data, along with alternative models that considered only misclassification or missing data, and the naïve model that ignored both sources of bias. We identified noticeable differences between the four models with respect to the posterior distributions of the odds ratios that described the joint associations of radiotherapy and smoking with primary lung cancer. Despite those differences we found that the general conclusions regarding the pattern of associations were the same regardless of the model used. Overall our results indicate a nonsignificantly decreased lung cancer risk due to radiotherapy among nonsmokers, and a mildly increased risk among smokers. We described easy to implement Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to misclassification and missing data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. A spectral power analysis of driving behavior changes during the transition from nondistraction to distraction.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-11-17

    This article investigated and compared frequency-domain and time-domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. The fast Fourier transform (FFT) was applied for the frequency-domain analysis to explore drivers' behavior pattern changes between nondistracted (pre-start of a visual-manual task) and distracted (post-start of a visual-manual task) driving periods. The average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and compared. Sensitivity analyses were also applied to examine the reliability of the time- and frequency-domain analyses. Results of the mixed-model analyses in both the time and frequency domains showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency-domain analysis was less sensitive to the frequency bandwidth, whereas the time-domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis of yaw rate in both low and high frequency bandwidths showed consistent results: higher variation values were observed during distracted driving than during nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the pre-start period, instead of focusing only on periods with the physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls. In addition, frequency-domain analyses proved to be a more robust and consistent method for assessing driving performance than time-domain analyses.
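
    A hedged sketch of the two windowed measures compared above, the relative spectral power of a control signal in the 0-0.5 Hz band and its standard deviation over a 10-s window, is shown below; the sampling rate and the synthetic yaw-rate signal are assumptions.

        import numpy as np

        fs = 10.0                                   # Hz, assumed sampling rate
        t = np.arange(0, 10, 1 / fs)                # one 10-s window
        rng = np.random.default_rng(5)
        yaw_rate = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.normal(size=t.size)  # synthetic signal

        spectrum = np.abs(np.fft.rfft(yaw_rate - yaw_rate.mean())) ** 2
        freqs = np.fft.rfftfreq(yaw_rate.size, d=1 / fs)

        band = (freqs > 0) & (freqs <= 0.5)
        relative_power = spectrum[band].sum() / spectrum[freqs > 0].sum()
        print(f"relative power in 0-0.5 Hz band: {relative_power:.2f}")

        # Time-domain counterpart used for comparison in the article: the windowed SD.
        print(f"standard deviation in the 10-s window: {yaw_rate.std(ddof=1):.3f}")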

  1. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  2. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  3. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  4. A framework for improving a seasonal hydrological forecasting system using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Pappenberger, Florian; Smith, Paul; Cloke, Hannah

    2017-04-01

    Seasonal streamflow forecasts are of great value for the socio-economic sector, for applications such as navigation, flood and drought mitigation, and reservoir management for hydropower generation and water allocation to agriculture and drinking water. However, at present the performance of dynamical seasonal hydrological forecasting systems (systems based on running seasonal meteorological forecasts through a hydrological model to produce seasonal hydrological forecasts) is still limited in space and time. In this context, ESP (Ensemble Streamflow Prediction) remains an attractive method for seasonal streamflow forecasting, as it relies on forcing a hydrological model (starting from the latest observed or simulated initial hydrological conditions) with historical meteorological observations. This makes it cheaper to run than a standard dynamical seasonal hydrological forecasting system, for which seasonal meteorological forecasts must first be produced, while still producing skilful forecasts. There is thus a need to focus resources and time on improvements in dynamical seasonal hydrological forecasting systems that will eventually lead to significant improvements in the skill of the streamflow forecasts generated. Sensitivity analyses are a powerful tool that can be used to disentangle the relative contributions of the two main sources of error in seasonal streamflow forecasts, namely the initial hydrological conditions (IHC; e.g., soil moisture, snow cover, initial streamflow, among others) and the meteorological forcing (MF; i.e., seasonal meteorological forecasts of precipitation and temperature, input to the hydrological model). Sensitivity analyses are, however, most useful if they inform and change current operational practice. To this end, we propose a method to improve the design of a seasonal hydrological forecasting system. This method is based on sensitivity analyses that inform forecasters as to which element of the forecasting chain (i.e., IHC or MF) could potentially lead to the largest increase in seasonal hydrological forecasting performance after each forecast update.

  5. Does equity sensitivity moderate the relationship between effort-reward imbalance and burnout.

    PubMed

    Oren, Lior; Littman-Ovadia, Hadassah

    2013-01-01

    The effort-reward imbalance (ERI) model has received considerable research attention in the job stress literature. However, little research has investigated individual differences as moderators of the relationship between ERI and stress. The present study examines the combined effects of ERI, overcommitment (OVC), and the interaction between ERI and overcommitment on burnout (i.e., emotional exhaustion, cynicism, and inefficacy), and the moderating role of equity sensitivity. A questionnaire measuring ERI, burnout, and equity sensitivity was administered to 159 employees. Regression analyses were conducted to test the proposed relations and moderating hypotheses. ERI was negatively related to inefficacy, and overcommitment was positively related to emotional exhaustion and cynicism. In addition, equity sensitivity was found to moderate the effect of overcommitment on emotional exhaustion and inefficacy. The findings emphasize the detrimental effect overcommitment may have on employees' mental health and suggest that the ERI model components may be closely related to perceptions of organizational justice.

  6. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with results obtained using divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
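
    The flow solver itself is not reproduced in the record, but the mechanics of comparing automatic differentiation against divided differences can be sketched in a few lines of Python. In the hedged example below, lift_coefficient and its four parameters are invented stand-ins for a solver output and its damping/turbulence-model inputs; only the pattern of differentiating an output with respect to input parameters and checking it against central divided differences mirrors the study.

        # Sketch only: AD gradient of a scalar output with respect to algorithm and
        # modelling parameters (via JAX), checked against divided differences.
        import jax
        import jax.numpy as jnp

        def lift_coefficient(params):
            # Hypothetical smooth dependence on two damping coefficients and two
            # turbulence-model constants (names and values are illustrative only).
            k2, k4, a_plus, c_kleb = params
            return jnp.tanh(3.0 * k2) + 0.1 * k4 ** 2 + jnp.exp(-a_plus / 26.0) * c_kleb

        params = jnp.array([0.25, 0.016, 26.0, 0.3])

        # Exact derivatives: one reverse-mode pass yields the full gradient.
        grad_ad = jax.grad(lift_coefficient)(params)

        # Central divided-difference approximation for comparison.
        eps = 1e-5
        grad_dd = jnp.array([
            (lift_coefficient(params.at[i].add(eps))
             - lift_coefficient(params.at[i].add(-eps))) / (2 * eps)
            for i in range(params.size)
        ])

        print("AD gradient:", grad_ad)
        print("DD gradient:", grad_dd)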

  7. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform diagnostic meta-analyses. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but two distinctly different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
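
    The paper's estimation machinery is not shown in the record; as a loose illustration of the clustering idea only, the Python sketch below fits a bivariate normal mixture to synthetic study-level (sensitivity, specificity) pairs on the logit scale with scikit-learn, chooses the number of components by BIC, and reports the component means. All data and settings are invented, and unlike the authors' model this sketch ignores within-study sampling error.

        # Illustrative sketch (not the authors' code): cluster logit-transformed
        # (sensitivity, specificity) pairs with a bivariate Gaussian mixture.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        logit = lambda p: np.log(p / (1 - p))
        expit = lambda x: 1 / (1 + np.exp(-x))

        # Two synthetic classes: high sensitivity with high or moderate specificity.
        class1 = np.column_stack([logit(rng.uniform(0.90, 0.99, 15)),
                                  logit(rng.uniform(0.85, 0.95, 15))])
        class2 = np.column_stack([logit(rng.uniform(0.90, 0.99, 15)),
                                  logit(rng.uniform(0.55, 0.75, 15))])
        studies = np.vstack([class1, class2])

        # Pick the number of latent classes by BIC, then inspect component means.
        models = {k: GaussianMixture(n_components=k, covariance_type="full",
                                     random_state=0).fit(studies) for k in (1, 2, 3)}
        best_k = min(models, key=lambda k: models[k].bic(studies))
        for mean in models[best_k].means_:
            print("component mean (sensitivity, specificity):", expit(mean).round(3))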

  8. Atmospheric Dispersal and Deposition of Tephra From a Potential Volcanic Eruption at Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Keating; W. Statham

    2004-02-12

    The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.

  9. The Impact of Assimilating Precipitation-affected Radiance on Cloud and Precipitation in Goddard WRF-EDAS Analyses

    NASA Technical Reports Server (NTRS)

    Lin, Xin; Zhang, Sara Q.; Zupanski, M.; Hou, Arthur Y.; Zhang, J.

    2015-01-01

    High-frequency TMI and AMSR-E radiances, which are sensitive to precipitation over land, are assimilated into the Goddard Weather Research and Forecasting Model-Ensemble Data Assimilation System (WRF-EDAS) for a few heavy rain events over the continental US. Independent observations of surface rainfall and satellite IR brightness temperatures, as well as ground-radar reflectivity profiles, are used to evaluate the impact of assimilating rain-sensitive radiances on cloud and precipitation within WRF-EDAS. The evaluations go beyond comparisons of forecast skill and domain-mean statistics, focusing on cloud and precipitation features in the joint rain-radiance and rain-cloud space, with particular attention to the vertical distributions of height-dependent cloud types and the collective effect of cloud hydrometeors. Such a methodology is helpful for understanding the limitations and sources of error in rain-affected radiance assimilation. It is found that the assimilation of rain-sensitive radiances can reduce the mismatch between model analyses and observations by reasonably enhancing/reducing convective intensity over areas where the observations indicate precipitation, and suppressing convection over areas where the model forecast indicates rain but the observations do not. It is also noted that instead of generating sufficient low-level warm-rain clouds as in the observations, the model analysis tends to produce many spurious upper-level clouds containing small amounts of ice water content. This discrepancy is associated with insufficient information in ice-water-sensitive radiances to constrain the vertical distribution of clouds with small amounts of ice water content. Such a problem will likely be mitigated when multi-channel, multi-frequency radiances/reflectivity are assimilated over land along with sufficiently accurate surface emissivity information to better constrain the vertical distribution of cloud hydrometeors.

  10. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
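
    The emulation strategy can be illustrated with a small, self-contained Python sketch. Everything below is synthetic: the toy_climate_output function, parameter ranges, and ensemble sizes are invented stand-ins; only the idea of pooling many cheap low-resolution runs with a few high-resolution runs, with resolution itself included as a predictor of a random forest, follows the abstract.

        # Sketch: random-forest emulator trained on a mixed-resolution ensemble.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        n_low, n_high, n_params = 400, 20, 5

        # Perturbed model parameters (columns) plus a resolution flag (last column).
        X_low = np.column_stack([rng.uniform(0, 1, (n_low, n_params)),
                                 np.zeros(n_low)])    # 0 = low resolution
        X_high = np.column_stack([rng.uniform(0, 1, (n_high, n_params)),
                                  np.ones(n_high)])   # 1 = high resolution

        def toy_climate_output(X):
            # Hypothetical response (e.g., a global-mean flux) with a resolution offset.
            return (2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, -1]
                    + rng.normal(0, 0.05, len(X)))

        y = np.concatenate([toy_climate_output(X_low), toy_climate_output(X_high)])
        forest = RandomForestRegressor(n_estimators=500, random_state=0)
        forest.fit(np.vstack([X_low, X_high]), y)

        # Predict the high-resolution response at new parameter settings.
        X_new = np.column_stack([rng.uniform(0, 1, (3, n_params)), np.ones(3)])
        print(forest.predict(X_new))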

  11. Enhanced Photoacoustic Gas Analyser Response Time and Impact on Accuracy at Fast Ventilation Rates during Multiple Breath Washout

    PubMed Central

    Horsley, Alex; Macleod, Kenneth; Gupta, Ruchi; Goddard, Nick; Bell, Nicholas

    2014-01-01

    Background The Innocor device contains a highly sensitive photoacoustic gas analyser that has been used to perform multiple breath washout (MBW) measurements using very low concentrations of the tracer gas SF6. Use in smaller subjects has been restricted by the requirement for a gas analyser response time of <100 ms, in order to ensure accurate estimation of lung volumes at rapid ventilation rates. Methods A series of previously reported and novel enhancements was made to the gas analyser to produce a clinically practical system with a reduced response time. An enhanced lung model system, capable of delivering highly accurate ventilation rates and volumes, was used to assess in vitro accuracy of functional residual capacity (FRC) volume calculation and the effects of flow and gas signal alignment on this. Results 10–90% rise time was reduced from 154 to 88 ms. In an adult/child lung model, accuracy of volume calculation was −0.9 to 2.9% for all measurements, including those with a ventilation rate of 30/min and an FRC of 0.5 L; for the un-enhanced system, accuracy deteriorated at higher ventilation rates and smaller FRC. In a separate smaller lung model (ventilation rate 60/min, FRC 250 ml, tidal volume 100 ml), mean accuracy of FRC measurement for the enhanced system was −0.95% (range −3.8 to 2.0%). Error sensitivity to flow and gas signal alignment was increased by higher ventilation rates, smaller FRC, and slower analyser response times. Conclusion The Innocor analyser can be enhanced to reliably generate highly accurate FRC measurements down to volumes as low as those simulating infant lung settings. Signal alignment is a critical factor. With these enhancements, the Innocor analyser exceeds key technical component recommendations for MBW apparatus. PMID:24892522

  12. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
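
    As a hedged illustration of the variance-based methods named above, the Python sketch below runs a Sobol analysis with the open-source SALib package (not used in the study, and assumed to be installed) on a toy exposure function. Input names, bounds, and the functional form are invented; the first-order (S1) and total-order (ST) indices correspond to the main and total contributions of each input to the output variance discussed in the abstract.

        # Sketch of a variance-based (Sobol) sensitivity analysis on a toy model.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["air_concentration", "contact_rate", "transfer_efficiency"],
            "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
        }

        # Saltelli sampling generates N * (2D + 2) parameter sets for D inputs.
        param_values = saltelli.sample(problem, 1024)

        def toy_exposure(x):
            # Hypothetical nonlinear exposure model with an interaction term.
            return x[:, 0] * x[:, 1] + 0.3 * np.sin(np.pi * x[:, 2]) + 0.5 * x[:, 0] * x[:, 2]

        Y = toy_exposure(param_values)
        Si = sobol.analyze(problem, Y)

        # Main (S1) and total (ST) contributions of each input to the output variance.
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: S1={s1:.2f}, ST={st:.2f}")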

  13. Cost effectiveness of imatinib compared with interferon-alpha or hydroxycarbamide for first-line treatment of chronic myeloid leukaemia.

    PubMed

    Dalziel, Kim; Round, Ali; Garside, Ruth; Stein, Ken

    2005-01-01

    To evaluate the cost utility of imatinib compared with interferon (IFN)-alpha or hydroxycarbamide (hydroxyurea) for first-line treatment of chronic myeloid leukaemia. A cost-utility (Markov) model within the setting of the UK NHS and viewed from a health system perspective was adopted. Transition probabilities and relative risks were estimated from published literature. Costs of drug treatment, outpatient care, bone marrow biopsies, radiography, blood transfusions and inpatient care were obtained from the British National Formulary and local hospital databases. Costs (£, year 2001-03 values) were discounted at 6%. Quality-of-life (QOL) data were obtained from the published literature and discounted at 1.5%. The main outcome measure was cost per QALY gained. Extensive one-way sensitivity analyses were performed along with probabilistic (stochastic) analysis. The incremental cost-effectiveness ratio (ICER) of imatinib, compared with IFN-alpha, was £26,180 per QALY gained (one-way sensitivity analyses ranged from £19,449 to £51,870) and compared with hydroxycarbamide was £86,934 per QALY (one-way sensitivity analyses ranged from £69,701 to £147,095) [£1 = $US1.691 = €1.535 as at 31 December 2002]. Based on the probabilistic sensitivity analysis, 50% of the ICERs for imatinib, compared with IFN-alpha, fell below a threshold of approximately £31,000 per QALY gained. Fifty percent of ICERs for imatinib, compared with hydroxycarbamide, fell below approximately £95,000 per QALY gained. This model suggests, given its underlying data and assumptions, that imatinib may be moderately cost effective when compared with IFN-alpha but considerably less cost effective when compared with hydroxycarbamide. There are, however, many uncertainties due to the lack of long-term data.
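
    None of this study's inputs or structure is reproduced here, but the basic mechanics of a Markov cost-utility comparison with differential discounting of costs and QALYs can be sketched in Python. The three states, transition probabilities, costs, and utilities below are invented for illustration (only the 6% and 1.5% discount rates echo the abstract); the general recipe is to cycle a cohort through health states, accumulate discounted costs and QALYs for each strategy, and take the ratio of the increments as the ICER.

        # Toy Markov cohort model: discounted costs and QALYs for two strategies.
        import numpy as np

        def run_markov(transition, annual_cost, utility, years=20,
                       disc_cost=0.06, disc_qaly=0.015):
            state = np.array([1.0, 0.0, 0.0])     # whole cohort starts in chronic phase
            cost = qaly = 0.0
            for t in range(years):
                cost += state @ annual_cost / (1 + disc_cost) ** t
                qaly += state @ utility / (1 + disc_qaly) ** t
                state = state @ transition        # one annual cycle
            return cost, qaly

        # States: chronic phase, progressed disease, dead (each row sums to 1).
        P_new = np.array([[0.90, 0.07, 0.03],
                          [0.00, 0.80, 0.20],
                          [0.00, 0.00, 1.00]])
        P_old = np.array([[0.80, 0.14, 0.06],
                          [0.00, 0.80, 0.20],
                          [0.00, 0.00, 1.00]])

        cost_new, qaly_new = run_markov(P_new, np.array([30000, 15000, 0]),
                                        np.array([0.85, 0.55, 0.0]))
        cost_old, qaly_old = run_markov(P_old, np.array([8000, 15000, 0]),
                                        np.array([0.80, 0.55, 0.0]))

        icer = (cost_new - cost_old) / (qaly_new - qaly_old)
        print(f"ICER: {icer:,.0f} per QALY gained")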

  14. Remote sensing requirements as suggested by watershed model sensitivity analyses

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Rango, A.; Ormsby, J. P.; Ambaruch, R.

    1975-01-01

    A continuous simulation watershed model has been used to perform sensitivity analyses that provide guidance in defining remote sensing requirements for the monitoring of watershed features and processes. The results show that out of 26 input parameters having meaningful effects on simulated runoff, six appear to be obtainable with existing remote sensing techniques. Of these six parameters, three require the measurement of the areal extent of surface features (impervious areas, water bodies, and the extent of forested area), two require the discrimination of land use that can be related to the overland flow roughness coefficient or to the density of vegetation so as to estimate the magnitude of precipitation interception, and one requires the measurement of distance to obtain the length over which overland flow typically occurs. Observational goals are also suggested for monitoring such fundamental watershed processes as precipitation, soil moisture, and evapotranspiration. A case study on the Patuxent River in Maryland shows that runoff simulation is improved if recent satellite land use observations are used as model inputs as opposed to less timely topographic map information.

  15. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA-concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679

  16. Cost effectiveness analysis of immunotherapy in patients with grass pollen allergic rhinoconjunctivitis in Germany.

    PubMed

    Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M

    2012-01-01

    An economic evaluation was conducted to assess the outcomes, costs, and cost-effectiveness of the following grass-pollen immunotherapies, each given alongside symptomatic medication: OA (Oralair; Stallergenes S.A., Antony, France), GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló), compared with each other and with symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective, including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses showed that incremental outcomes were moderately sensitive to changes in efficacy estimates. The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ, ALD, and symptomatic treatment alone. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.

  17. Modeling fish community dynamics in Florida Everglades: Role of temperature variation

    USGS Publications Warehouse

    Al-Rabai'ah, H. A.; Koh, H. L.; DeAngelis, Donald L.; Lee, Hooi-Ling

    2002-01-01

    The model shows that temperature-dependent starvation mortality is an important factor influencing fish population densities. It also shows high fish population densities in temperature ranges where consumption needs are at a minimum. Several sensitivity analyses involving variations in temperature terms, food resources, and water levels are conducted to ascertain the relative importance of the temperature-dependence terms.

  18. Integrated Patient-Derived Models Delineate Individualized Therapeutic Vulnerabilities of Pancreatic Cancer.

    PubMed

    Witkiewicz, Agnieszka K; Balaji, Uthra; Eslinger, Cody; McMillan, Elizabeth; Conway, William; Posner, Bruce; Mills, Gordon B; O'Reilly, Eileen M; Knudsen, Erik S

    2016-08-16

    Pancreatic ductal adenocarcinoma (PDAC) harbors the worst prognosis of any common solid tumor, and multiple failed clinical trials indicate therapeutic recalcitrance. Here, we use exome sequencing of patient tumors and find multiple conserved genetic alterations. However, the majority of tumors exhibit no clearly defined therapeutic target. High-throughput drug screens using patient-derived cell lines found rare examples of sensitivity to monotherapy, with most models requiring combination therapy. Using PDX models, we confirmed the effectiveness and selectivity of the identified treatment responses. Out of more than 500 single and combination drug regimens tested, no single treatment was effective for the majority of PDAC tumors, and each case had unique sensitivity profiles that could not be predicted using genetic analyses. These data indicate a shortcoming of reliance on genetic analysis to predict efficacy of currently available agents against PDAC and suggest that sensitivity profiling of patient-derived models could inform personalized therapy design for PDAC. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  19. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  20. Optical imaging of RNAi-mediated silencing of cancer

    NASA Astrophysics Data System (ADS)

    Ochiya, Takahiro; Honma, Kimi; Takeshita, Fumitaka; Nagahara, Shunji

    2008-02-01

    RNAi has rapidly become a powerful tool for drug target discovery and validation in an in vitro culture system and, consequently, interest is rapidly growing for extension of its application to in vivo systems, such as animal disease models and human therapeutics. Cancer is one obvious application for RNAi therapeutics, because abnormal gene expression is thought to contribute to the pathogenesis and maintenance of the malignant phenotype of cancer, and thereby many oncogenes and cell-signaling molecules present enticing drug-target possibilities. RNAi, potent and specific, could silence tumor-related genes and would appear to be a rational approach to inhibit tumor growth. In subsequent in vivo studies, the appropriate cancer model must be developed for an evaluation of siRNA effects on tumors. How to evaluate the effect of siRNA in an in vivo therapeutic model is also important. Accelerating the analyses of these models and improving their predictive value through whole-animal imaging methods, which reveal cancer inhibition in real time and are sensitive to subtle changes, are crucial for rapid advancement of these approaches. Bioluminescent imaging is one of these optically based imaging methods that enable rapid in vivo analyses of a variety of cellular and molecular events with extreme sensitivity.

  1. RAS testing and cetuximab treatment for metastatic colorectal cancer: a cost-effectiveness analysis in a setting with limited health resources.

    PubMed

    Wu, Bin; Yao, Yuan; Zhang, Ke; Ma, Xuezhen

    2017-09-19

    To test the cost-effectiveness of cetuximab plus irinotecan, fluorouracil, and leucovorin (FOLFIRI) as first-line treatment in patients with metastatic colorectal cancer (mCRC) from a Chinese medical insurance perspective. A Markov model incorporating clinical, utility, and cost data was developed to evaluate the economic outcome of cetuximab in mCRC. A lifetime horizon was used, and sensitivity analyses were carried out to test the robustness of the model results; the impact of a patient assistance program (PAP) was also evaluated in scenario analyses. Baseline analysis showed that the addition of cetuximab increased quality-adjusted life-years (QALYs) by 0.63 at an additional cost of $17,086 relative to FOLFIRI chemotherapy, resulting in an incremental cost-effectiveness ratio (ICER) of $27,145/QALY. When the PAP was available, the ICER decreased to $14,049/QALY, indicating that cetuximab is cost-effective at the Chinese willingness-to-pay threshold ($22,200/QALY). One-way sensitivity analyses showed that the median overall survival time for cetuximab was the most influential parameter. RAS testing with cetuximab treatment is likely to be cost-effective for patients with mCRC when PAP is available in China.

  2. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.

  3. Multireaction equilibrium geothermometry: A sensitivity analysis using data from the Lower Geyser Basin, Yellowstone National Park, USA

    USGS Publications Warehouse

    King, Jonathan M.; Hurwitz, Shaul; Lowenstern, Jacob B.; Nordstrom, D. Kirk; McCleskey, R. Blaine

    2016-01-01

    A multireaction chemical equilibria geothermometry (MEG) model applicable to high-temperature geothermal systems has been developed over the past three decades. Given sufficient data, this model provides more constraint on calculated reservoir temperatures than classical chemical geothermometers that are based on either the concentration of silica (SiO2) or the ratios of cation concentrations. A set of 23 chemical analyses from Ojo Caliente Spring and 22 analyses from other thermal features in the Lower Geyser Basin of Yellowstone National Park is used with the GeoT MEG code (Spycher et al. 2013, 2014) to quantify the effects of solute concentrations, degassing, and mineral assemblages on calculated reservoir temperatures. Results of our analysis demonstrate that the MEG model can resolve reservoir temperatures within approximately ±15°C, and that natural variation in fluid compositions represents a greater source of variance in calculated reservoir temperatures than variations caused by analytical uncertainty (assuming ~5% for major elements). The analysis also suggests that MEG calculations are particularly sensitive to variations in silica concentration and in the concentrations of the redox species Fe(II) and H2S, and that the parameters defining steam separation and CO2 degassing from the liquid may be adequately determined by numerical optimization. Results from this study can provide guidance for future applications of MEG models, and thus provide more reliable information on geothermal energy resources during exploration.

  4. Economic evaluation of linaclotide for the treatment of adult patients with irritable bowel syndrome with constipation in the United States.

    PubMed

    Huang, Huan; Taylor, Douglas C A; Carson, Robyn T; Sarocco, Phil; Friedman, Mark; Munsell, Michael; Blum, Steven I; Menzin, Joseph

    2015-04-01

    To use techniques of decision-analytic modeling to evaluate the effectiveness and costs of linaclotide vs lubiprostone in the treatment of adult patients with irritable bowel syndrome with constipation (IBS-C). Using model inputs derived from published literature, linaclotide Phase III trial data and a physician survey, a decision-tree model was constructed. Response to therapy was defined as (1) a ≥ 14-point increase from baseline in IBS-Quality-of-Life (IBS-QoL) questionnaire overall score at week 12 or (2) one of the top two responses (moderately/significantly relieved) on a 7-point IBS symptom relief question in ≥ 2 of 3 months. Patients who do not respond to therapy are assumed to fail therapy and accrue costs associated with a treatment failure. Model time horizon is aligned with clinical trial duration of 12 weeks. Model outputs include number of responders, quality-adjusted life-years (QALYs), and total costs (including direct and indirect). Both one-way and probabilistic sensitivity analyses were conducted. Treatment for IBS-C with linaclotide produced more responders than lubiprostone for both response definitions (19.3% vs 13.0% and 61.8% vs 57.2% for IBS-QoL and symptom relief, respectively), lower per-patient costs ($803 vs $911 and $977 vs $1056), and higher QALYs (0.1921 vs 0.1917 and 0.1909 vs 0.1894) over the 12-week time horizon. Results were similar for most one-way sensitivity analyses. In probabilistic sensitivity analyses, the majority of simulations resulted in linaclotide having higher treatment response rates and lower per-patient costs. There are no available head-to-head trials that compare linaclotide with lubiprostone; therefore, placebo-adjusted estimates of relative efficacy were derived for model inputs. The time horizon for this model is relatively short, as it was limited to the duration of available clinical trial data. Linaclotide was found to be a less costly option vs lubiprostone for the treatment of adult patients with IBS-C.

  5. Cost-effectiveness of breast cancer screening using mammography in Vietnamese women

    PubMed Central

    2018-01-01

    Background The incidence rate of breast cancer is increasing, and breast cancer has become the most common cancer in Vietnamese women, while the survival rate is lower than that in developed countries. Early detection to improve breast cancer survival as well as reducing risk factors remains the cornerstone of breast cancer control according to the World Health Organization (WHO). This study aims to evaluate the costs and outcomes of introducing a mammography screening program for Vietnamese women aged 45–64 years, compared to the current situation of no screening. Methods Decision analytical modeling using Markov chain analysis was used to estimate costs and health outcomes over a lifetime horizon. Model inputs were derived from published literature and the results were reported as incremental cost-effectiveness ratios (ICERs) and/or incremental net monetary benefits (INMBs). One-way sensitivity analyses and probabilistic sensitivity analyses were performed to assess parameter uncertainty. Results The ICER per life year gained of the first round of mammography screening was US$3,647.06 and US$4,405.44 for women aged 50–54 years and 55–59 years, respectively. In probabilistic sensitivity analyses, mammography screening in the 50–54 and 55–59 age groups was cost-effective in 100% of cases at a threshold of three times the Vietnamese Gross Domestic Product (GDP) per capita, i.e., US$6,332.70. However, less than 50% of the cases in the 60–64 age group and 0% of the cases in the 45–49 age group were cost-effective at the WHO threshold. The ICERs were sensitive to the discount rate, mammography sensitivity, and transition probability from remission to distant recurrence in stage II for all age groups. Conclusion From the healthcare payer viewpoint, offering the first round of mammography screening to Vietnamese women aged 50–59 years should be considered, with the given threshold of three times the Vietnamese GDP per capita. PMID:29579131

  6. Cost-effectiveness of training rural providers to identify and treat patients at risk for fragility fractures.

    PubMed

    Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J

    2014-12-01

    This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost saving, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient-years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analyses, and in probabilistic sensitivity analyses training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness. It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.

  7. Cost-effectiveness of EOB-MRI for Hepatocellular Carcinoma in Japan.

    PubMed

    Nishie, Akihiro; Goshima, Satoshi; Haradome, Hiroki; Hatano, Etsuro; Imai, Yasuharu; Kudo, Masatoshi; Matsuda, Masanori; Motosugi, Utaroh; Saitoh, Satoshi; Yoshimitsu, Kengo; Crawford, Bruce; Kruger, Eliza; Ball, Graeme; Honda, Hiroshi

    2017-04-01

    The objective of the study was to evaluate the cost-effectiveness of gadoxetic acid-enhanced magnetic resonance imaging (EOB-MRI) in the diagnosis and treatment of hepatocellular carcinoma (HCC) in Japan compared with extracellular contrast media-enhanced MRI (ECCM-MRI) and contrast media-enhanced computed tomography (CE-CT) scanning. A 6-stage Markov model was developed to estimate lifetime direct costs and clinical outcomes associated with EOB-MRI. Diagnostic sensitivity and specificity, along with clinical data on HCC survival, recurrence, treatment patterns, costs, and health state utility values, were derived from predominantly Japanese publications. Parameters unavailable from publications were estimated in a Delphi panel of Japanese clinical experts who also confirmed the structure and overall approach of the model. Sensitivity analyses, including one-way, probabilistic, and scenario analyses, were conducted to account for uncertainty in the results. Over a lifetime horizon, EOB-MRI was associated with lower direct costs (¥2,174,869) and generated a greater number of quality-adjusted life years (QALYs) (9.502) than either ECCM-MRI (¥2,365,421, 9.303 QALYs) or CE-CT (¥2,482,608, 9.215 QALYs). EOB-MRI was superior to the other diagnostic strategies considered, and this finding was robust over sensitivity and scenario analyses. A majority of the direct costs associated with HCC in Japan were found to be costs of treatment. The model results revealed the superior cost-effectiveness of the EOB-MRI diagnostic strategy compared with ECCM-MRI and CE-CT. EOB-MRI could be the first-choice imaging modality for medical care of HCC among patients with hepatitis or liver cirrhosis in Japan. Widespread implementation of EOB-MRI could reduce health care expenditures, particularly downstream treatment costs, associated with HCC. Copyright © 2017 Elsevier HS Journals, Inc. All rights reserved.

  8. Assessing vaccination as a control strategy in an ongoing epidemic: Bovine tuberculosis in African buffalo

    USGS Publications Warehouse

    Cross, Paul C.; Getz, W.M.

    2006-01-01

    Bovine tuberculosis (BTB) is an exotic disease invading the buffalo population (Syncerus caffer) of the Kruger National Park (KNP), South Africa. We used a sex and age-structured epidemiological model to assess the effectiveness of a vaccination program and define important research directions. The model allows for dispersal between a focal herd and background population and was parameterized with a combination of published data and analyses of over 130 radio-collared buffalo in the central region of the KNP. Radio-tracking data indicated that all sex and age categories move between mixed herds, and males over 8 years old had higher mortality and dispersal rates than any other sex or age category. In part due to the high dispersal rates of buffalo, sensitivity analyses indicate that disease prevalence in the background population accounts for the most variability in the BTB prevalence and quasi-eradication within the focal herd. Vaccination rate and the transmission coefficient were the second and third most important parameters of the sensitivity analyses. Further analyses of the model without dispersal suggest that the amount of vaccination necessary for quasi-eradication (i.e. prevalence < 5%) depends upon the duration that a vaccine grants protection. Vaccination programs are more efficient (i.e. fewer wasted doses) when they focus on younger individuals. However, even with a lifelong vaccine and a closed population, the model suggests that >70% of the calf population would have to be vaccinated every year to reduce the prevalence to less than 1%. If the half-life of the vaccine is less than 5 years, even vaccinating every calf for 50 years may not eradicate BTB. Thus, although vaccination provides a means of controlling BTB prevalence it should be combined with other control measures if eradication is the objective.

  9. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569

  10. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models.

  11. Comparison between Deflection and Vibration Characteristics of Rectangular and Trapezoidal profile Microcantilevers

    PubMed Central

    Ansari, Mohd. Zahid; Cho, Chongdu; Kim, Jooyong; Bang, Booun

    2009-01-01

    Arrays of microcantilevers are increasingly being used as physical, biological, and chemical sensors in various applications. To improve the sensitivity of microcantilever sensors, this study analyses and compares the deflection and vibration characteristics of rectangular and trapezoidal profile microcantilevers. Three models of each profile are investigated. The cantilevers are analyzed for maximum deflection, fundamental resonant frequency and maximum stress. The surface stress is modelled as an in-plane tensile force applied on the top edge of the microcantilevers. The commercial finite element analysis software ANSYS is used to analyze the designs. Results show that paddled trapezoidal-profile microcantilevers have better sensitivity. PMID:22574041

  12. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
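
    PEST and DayCent are not reproduced here; as a conceptual sketch of calibration by inverse modeling, the Python example below estimates the parameters of a toy soil-carbon model by minimising a weighted sum of squared residuals with scipy, and exposes the Jacobian at the optimum, which is the raw material for the sensitivity and identifiability diagnostics mentioned above. The model form, parameter names, and observation weights are invented.

        # Sketch: weighted least-squares parameter estimation for a toy model.
        import numpy as np
        from scipy.optimize import least_squares

        t = np.arange(0.0, 20.0)

        def toy_model(params, t):
            decay, input_rate = params
            # Hypothetical carbon pool approaching input_rate / decay over time.
            return (input_rate / decay) * (1 - np.exp(-decay * t)) + 40.0

        true_params = np.array([0.15, 1.2])
        obs = toy_model(true_params, t) + np.random.default_rng(2).normal(0, 0.3, t.size)
        weights = np.full(t.size, 1 / 0.3)        # inverse of assumed observation error

        def weighted_residuals(params):
            return weights * (toy_model(params, t) - obs)

        fit = least_squares(weighted_residuals, x0=[0.05, 0.5],
                            bounds=([1e-3, 0.0], [1.0, 5.0]))
        phi = np.sum(fit.fun ** 2)                # total sum of weighted squared residuals
        print("estimated parameters:", fit.x.round(3), "objective:", round(phi, 2))

        # The Jacobian at the optimum underpins sensitivity/identifiability analysis.
        print("Jacobian shape:", fit.jac.shape)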

  13. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    PubMed

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
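
    PeTTSy itself is a MATLAB toolbox and is not reproduced here. Purely to illustrate the kind of experiment it automates, the Python sketch below applies a temporary perturbation to one parameter of a forced oscillator and measures the deviation of the perturbed solution from the unperturbed one; the equations, parameter values, and perturbation window are all invented.

        # Sketch: temporary parameter perturbation of a forced, damped oscillator.
        import numpy as np
        from scipy.integrate import solve_ivp

        def oscillator(t, y, k, perturb_window=None, perturb_factor=1.0):
            k_eff = k
            if perturb_window and perturb_window[0] <= t <= perturb_window[1]:
                k_eff = k * perturb_factor        # temporary change to the stiffness
            x, v = y
            return [v, -k_eff * x - 0.05 * v + 0.5 * np.cos(1.1 * t)]

        t_eval = np.linspace(0, 60, 1200)
        base = solve_ivp(oscillator, (0, 60), [1.0, 0.0], args=(1.0,),
                         t_eval=t_eval, max_step=0.05)
        pert = solve_ivp(oscillator, (0, 60), [1.0, 0.0],
                         args=(1.0, (20.0, 30.0), 1.5), t_eval=t_eval, max_step=0.05)

        # One simple scalar sensitivity measure: the largest deviation in x(t).
        print("max deviation in x:", np.max(np.abs(pert.y[0] - base.y[0])))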

  14. Sensitivities of Greenland ice sheet volume inferred from an ice sheet adjoint model

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Bugnion, V.

    2009-04-01

    We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. Since its introduction by MacAyeal (1992), the adjoint method has become a widespread means of fitting ice stream models to the increasing number and diversity of satellite observations, and of estimating uncertain model parameters such as basal conditions. However, no attempt has been made to extend this method to comprehensive ice sheet models. As a first step toward the use of adjoints of comprehensive three-dimensional ice sheet models we have generated an adjoint of the ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated by means of the automatic differentiation (AD) tool TAF. The AD tool generates exact source code representing the tangent linear and adjoint model of the nonlinear parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic with respect to the controls, and can be efficiently calculated via the adjoint. By way of example, we determine the sensitivity of the total Greenland ice volume to various control variables, such as spatial fields of basal flow parameters, surface and basal forcings, and initial conditions. Reliability of the adjoint was tested through finite-difference perturbation calculations for various control variables and perturbation regions. Besides confirming qualitative aspects of ice sheet sensitivities, such as expected regional variations, we detect regions where model sensitivities are seemingly unexpected or counter-intuitive, albeit "real" in the sense of actual model behavior. An example is inferred regions where sensitivities of ice sheet volume to the basal sliding coefficient are positive, i.e., where a local increase in the basal sliding parameter increases the ice sheet volume. Similarly, positive ice temperature sensitivities are found in certain parts of the ice sheet (in most regions the sensitivity is negative, i.e., an increase in temperature decreases ice sheet volume), the detection of which would have been highly unlikely if only conventional perturbation experiments had been used. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. Available adjoint code generation tools now open up a variety of novel model applications, notably with regard to sensitivity and uncertainty analyses and ice sheet state estimation or data assimilation.

  15. Confronting ‘confounding by health system use’ in Medicare Part D: Comparative effectiveness of propensity score approaches to confounding adjustment

    PubMed Central

    Polinski, Jennifer M.; Schneeweiss, Sebastian; Glynn, Robert J.; Lii, Joyce; Rassen, Jeremy

    2012-01-01

    Purpose Under Medicare Part D, patient characteristics influence plan choice, which in turn influences Part D coverage gap entry. We compared pre-defined propensity score (PS) and high-dimensional propensity score (hdPS) approaches to address such ‘confounding by health system use’ in assessing whether coverage gap entry is associated with cardiovascular events or death. Methods We followed 243,079 Medicare patients aged 65+ with linked prescription, medical, and plan-specific data in 2005–2007. Patients reached the coverage gap and were followed until an event or year’s end. Exposed patients were responsible for drug costs in the gap; unexposed patients (patients with non-Part D drug insurance and Part D patients receiving a low-income subsidy (LIS)) received financial assistance. Exposed patients were 1:1 PS- or hdPS-matched to unexposed patients. The PS model included 52 predefined covariates; the hdPS model added 400 empirically identified covariates. Hazard ratios for death and any of five cardiovascular outcomes were compared. In sensitivity analyses, we explored residual confounding using only LIS patients in the unexposed group. Results In unadjusted analyses, exposed patients had no greater hazard of death (HR=1.00; 95% CI, 0.84–1.20) or other outcomes. PS- (HR=1.29;0.99–1.66) and hdPS- (HR=1.11;0.86–1.42) matched analyses showed elevated but non-significant hazards of death. In sensitivity analyses, the PS analysis showed a protective effect (HR=0.78;0.61–0.98), while the hdPS analysis (HR=1.06;0.82–1.37) confirmed the main hdPS findings. Conclusion Although the PS-matched analysis suggested elevated though non-significant hazards of death among patients with no financial assistance during the gap, the hdPS analysis produced lower estimates that were stable across sensitivity analyses. PMID:22552984
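
    As a schematic of the propensity-score step only (the Part D analysis, its covariates, and the hdPS algorithm are not reproduced), the Python sketch below estimates a propensity score by logistic regression on synthetic covariates and then 1:1 matches each exposed patient to the nearest unexposed patient on that score, with a crude covariate-balance check before and after matching.

        # Sketch: 1:1 nearest-neighbour propensity-score matching on synthetic data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(3)
        n = 2000
        covariates = rng.normal(size=(n, 5))      # e.g., age, comorbidity scores, ...
        p_exposed = 1 / (1 + np.exp(-(0.6 * covariates[:, 0] - 0.4 * covariates[:, 1])))
        exposure = rng.binomial(1, p_exposed)

        ps_model = LogisticRegression(max_iter=1000).fit(covariates, exposure)
        ps = ps_model.predict_proba(covariates)[:, 1]

        exposed = np.where(exposure == 1)[0]
        unexposed = np.where(exposure == 0)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[unexposed].reshape(-1, 1))
        _, idx = nn.kneighbors(ps[exposed].reshape(-1, 1))
        matched = unexposed[idx.ravel()]

        # Crude balance check: covariate mean differences before vs after matching.
        print("pre-match:", (covariates[exposed].mean(0) - covariates[unexposed].mean(0)).round(2))
        print("post-match:", (covariates[exposed].mean(0) - covariates[matched].mean(0)).round(2))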

  16. Cost-effectiveness of workplace wellness to prevent cardiovascular events among U.S. firefighters.

    PubMed

    Patterson, P Daniel; Smith, Kenneth J; Hostler, David

    2016-11-21

    The leading cause of death among firefighters in the United States (U.S.) is cardiovascular events (CVEs) such as sudden cardiac arrest and myocardial infarction. This study compared the cost-effectiveness of three strategies to prevent CVEs among firefighters. We used a cost-effectiveness analysis model with published observational and clinical data, and cost quotes for physiologic monitoring devices, to determine the cost-effectiveness of three CVE prevention strategies. We adopted the fire department administrator perspective and varied parameter estimates in one-way and two-way sensitivity analyses. A wellness-fitness program prevented 10% of CVEs, for an event rate of 0.9% at a cost of $1440 over 10 years, giving an incremental cost-effectiveness ratio of $1.44 million per CVE prevented compared to no program. In one-way sensitivity analyses, monitoring was favored if its cost was < $116/year. In two-way sensitivity analyses, monitoring was not favored if its cost was ≥ $399/year. A wellness-fitness program was not favored if its preventive relative risk was > 0.928. Wellness-fitness programs may be a cost-effective solution for preventing CVEs among firefighters compared to real-time physiologic monitoring or doing nothing.
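
    The incremental cost-effectiveness ratio quoted above follows from simple arithmetic. The sketch below reproduces it from the rounded figures in the abstract, assuming a baseline 10-year event rate of 1.0% (so that preventing 10% of CVEs gives the stated 0.9%); the baseline rate is an assumption for illustration.

```python
# Rounded figures from the abstract; the 1.0% baseline event rate is an assumption
cost_program, cost_nothing = 1440.0, 0.0           # 10-year cost per firefighter (USD)
rate_nothing = 0.010                               # assumed baseline 10-year CVE rate
rate_program = rate_nothing * (1 - 0.10)           # 10% of CVEs prevented -> 0.9%

delta_cost = cost_program - cost_nothing
delta_events_prevented = rate_nothing - rate_program
icer = delta_cost / delta_events_prevented
print(f"ICER = ${icer:,.0f} per CVE prevented")    # $1,440,000, i.e. $1.44 million
```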

  17. ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-10-29

    This paper reviews and extends searches for the direct pair production of the scalar supersymmetric partners of the top and bottom quarks in proton–proton collisions collected by the ATLAS collaboration during the LHC Run 1. Most of the analyses use 20 fb⁻¹ of collisions at a centre-of-mass energy of √s = 8 TeV, although in some cases an additional 4.7 fb⁻¹ of collision data at √s = 7 TeV are used. New analyses are introduced to improve the sensitivity to specific regions of the model parameter space. Since no evidence of third-generation squarks is found, exclusion limits are derived by combining several analyses and are presented both in a simplified model framework, assuming simple decay chains, and within the context of more elaborate phenomenological supersymmetric models.

  18. Sensitivity of snowpack storage to precipitation and temperature using spatial and temporal analog models

    NASA Astrophysics Data System (ADS)

    Luce, Charles H.; Lopez-Burgos, Viviana; Holden, Zachary

    2014-12-01

    Empirical sensitivity analyses are important for evaluation of the effects of a changing climate on water resources and ecosystems. Although mechanistic models are commonly applied for evaluation of climate effects for snowmelt, empirical relationships provide a first-order validation of the various postulates required for their implementation. Previous studies of empirical sensitivity for April 1 snow water equivalent (SWE) in the western United States were developed by regressing interannual variations in SWE to winter precipitation and temperature. This offers a temporal analog for climate change, positing that a warmer future looks like warmer years. Spatial analogs are used to hypothesize that a warmer future may look like warmer places, and are frequently applied alternatives for complex processes, or states/metrics that show little interannual variability (e.g., forest cover). We contrast spatial and temporal analogs for sensitivity of April 1 SWE and the mean residence time of snow (SRT) using data from 524 Snowpack Telemetry (SNOTEL) stations across the western U.S. We built relatively strong models using spatial analogs to relate temperature and precipitation climatology to snowpack climatology (April 1 SWE, R²=0.87, and SRT, R²=0.81). Although the poorest temporal analog relationships were in areas showing the highest sensitivity to warming, spatial analog models showed consistent performance throughout the range of temperature and precipitation. Generally, slopes from the spatial relationships showed greater thermal sensitivity than the temporal analogs, and high elevation stations showed greater vulnerability using a spatial analog than shown in previous modeling and sensitivity studies. The spatial analog models provide a simple perspective to evaluate potential futures and may be useful in further evaluation of snowpack with warming.

  19. Mechanisms of change in cognitive behavioral therapy for panic disorder: The unique effects of self-efficacy and anxiety sensitivity

    PubMed Central

    Gallagher, Matthew W.; Payne, Laura A.; White, Kamila S.; Shear, Katherine M.; Woods, Scott W.; Gorman, Jack M.; Barlow, David H.

    2013-01-01

    The present study examined temporal dependencies of change of panic symptoms and two promising mechanisms of change (self-efficacy and anxiety sensitivity) during an 11-session course of cognitive-behavior therapy (CBT) for Panic Disorder (PD). 361 individuals with a principal diagnosis of PD completed measures of self-efficacy, anxiety sensitivity, and PD symptoms at each session during treatment. Effect size analyses indicated that the greatest changes in anxiety sensitivity occurred early in treatment, whereas the greatest changes in self-efficacy occurred later in treatment. Results of parallel process latent growth curve models indicated that changes in self-efficacy and anxiety sensitivity across treatment uniquely predicted changes in PD symptoms. Bivariate and multivariate latent difference score models indicated, as expected, that changes in anxiety sensitivity and self-efficacy temporally preceded changes in panic symptoms, and that intraindividual changes in anxiety sensitivity and self-efficacy independently predicted subsequent intraindividual changes in panic symptoms. These results provide strong evidence that changes in self-efficacy and anxiety sensitivity during CBT influence subsequent changes in panic symptoms, and that self-efficacy and anxiety sensitivity may therefore be two distinct mechanisms of change of CBT for PD that have their greatest impact at different stages of treatment. PMID:24095901

  20. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
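
    A minimal one-dimensional sketch of the optimal interpolation analysis step, with a Gaussian background-error covariance whose decorrelation scale L is the tunable parameter examined above. The grid, the background field and the "observations" are invented for illustration and do not come from the study.

```python
import numpy as np

def oi_analysis(xb, grid, obs, obs_loc, sigma_b, sigma_o, L):
    """Optimal interpolation update with a Gaussian background-error correlation of scale L."""
    d_go = grid[:, None] - obs_loc[None, :]
    d_oo = obs_loc[:, None] - obs_loc[None, :]
    B_go = sigma_b**2 * np.exp(-0.5 * (d_go / L)**2)        # background covariance, grid vs obs
    B_oo = sigma_b**2 * np.exp(-0.5 * (d_oo / L)**2)        # background covariance, obs vs obs
    R = sigma_o**2 * np.eye(obs.size)                       # observation-error covariance
    K = B_go @ np.linalg.solve(B_oo + R, np.eye(obs.size))  # gain matrix
    xb_at_obs = np.interp(obs_loc, grid, xb)
    return xb + K @ (obs - xb_at_obs)

grid = np.linspace(0.0, 150.0, 31)            # km, toy 1-D section of the domain
xb = np.zeros_like(grid)                      # background stream function (toy)
obs_loc = np.array([30.0, 75.0, 120.0])
obs = np.array([0.8, -0.4, 0.5])              # invented "hydrographic" values

for L in (10.0, 30.0, 60.0):                  # vary the assumed decorrelation scale
    xa = oi_analysis(xb, grid, obs, obs_loc, sigma_b=1.0, sigma_o=0.3, L=L)
    print(L, np.round(xa[::10], 2))
```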

  1. Getting what you want: How fit between desired and received leader sensitivity influences emotion and counterproductive work behavior.

    PubMed

    Rupprecht, Elizabeth A; Kueny, Clair Reynolds; Shoss, Mindy K; Metzger, Andrew J

    2016-10-01

    We challenge the intuitive belief that greater leader sensitivity is always associated with desirable outcomes for employees and organizations. Specifically, we argue that followers' idiosyncratic desires for, and perceptions of, leader sensitivity behaviors play a key role in how followers react to their leader's sensitivity. Moreover, these resulting affective experiences are likely to have important consequences for organizations, specifically as they relate to employee counterproductive work behavior (CWB). Drawing from supplies-values (S-V) fit theory and the stressor-emotion model of CWB, the current study focuses on the affective and behavioral consequences of fit between subordinates' ideal leader sensitivity behavior preferences and subordinates' perceptions of their actual leader's sensitivity behaviors. Polynomial regression analyses reveal that congruence between ideal and actual leader sensitivity influences employee negative affect and, consequently, engagement in counterproductive work behavior. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.

    PubMed

    Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi

    2016-09-01

    The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE and EMBASE from inception to March 2014, the Cochrane Methodology Register, websites of key journals and organisations, and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment, or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.

  3. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least-squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and, to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.

  4. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomial kernel function and a Gaussian radial basis kernel function, so it possesses both the global characteristic advantage of the polynomial kernel and the local characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
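
    The paper derives the Sobol indices analytically from the coefficients of the mixed-kernel SVR; the sketch below takes a simpler, generic route for illustration. It fits an off-the-shelf RBF-kernel SVR as a surrogate of the Ishigami test function and then estimates first-order Sobol indices by Monte Carlo (a Saltelli-type pick-freeze estimator) evaluated on the cheap surrogate. All settings are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.svm import SVR

def ishigami(X, a=7.0, b=0.1):
    """Classic analytical test function for global sensitivity analysis."""
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 + b * X[:, 2]**4 * np.sin(X[:, 0])

rng = np.random.default_rng(1)
lo, hi = -np.pi, np.pi

# Train a surrogate on a modest number of "expensive" model runs
X_train = rng.uniform(lo, hi, size=(300, 3))
svr = SVR(kernel="rbf", C=100.0, gamma=0.5, epsilon=0.01).fit(X_train, ishigami(X_train))

# First-order Sobol indices on the surrogate (Saltelli pick-freeze estimator)
N = 20000
A = rng.uniform(lo, hi, size=(N, 3))
B = rng.uniform(lo, hi, size=(N, 3))
fA, fB = svr.predict(A), svr.predict(B)
var = fA.var()
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # replace one column of A by the B values
    Si = np.mean(fB * (svr.predict(ABi) - fA)) / var
    print(f"S{i + 1} ~ {Si:.2f}")              # analytic values are about 0.31, 0.44, 0.00;
                                               # surrogate error shifts the estimates somewhat
```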

  5. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, such as the observation error, model covariances, ensemble size, and the perturbation distribution of the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
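
    A minimal sketch of the OSSE logic that BEATBOX implements on a much larger scale: a "truth" run of a toy box model, an observation simulator that perturbs it with noise, and a single (non-cycled) scalar analysis update of a biased model run. BEATBOX itself is not used here; everything below is an invented toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_model(c0, k=0.2, s=10.0, nsteps=24, dt=1.0):
    """Toy box-model concentration relaxing toward a source/sink balance s/k."""
    c = np.empty(nsteps)
    c[0] = c0
    for t in range(1, nsteps):
        c[t] = c[t - 1] + dt * (s - k * c[t - 1])
    return c

truth = box_model(c0=80.0)                               # nature ("truth") run
obs_err = 3.0
obs = truth + rng.normal(0.0, obs_err, truth.size)       # observation simulator

# Biased model run (wrong initial condition) and a simple scalar analysis update
model = box_model(c0=40.0)
sigma_b = 5.0                                            # assumed background-error std
gain = sigma_b**2 / (sigma_b**2 + obs_err**2)
analysis = model + gain * (obs - model)

print("RMSE model vs truth:   ", np.sqrt(((model - truth)**2).mean()).round(2))
print("RMSE analysis vs truth:", np.sqrt(((analysis - truth)**2).mean()).round(2))
```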

  6. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
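
    A minimal sketch of the hierarchical (second-stage prior) idea using simple normal-normal shrinkage; the exposure estimates, standard errors and prior settings are invented for illustration, and a full analysis would normally be fitted with dedicated hierarchical-regression or Bayesian software.

```python
import numpy as np

# First-stage (conventional) estimates for several correlated exposures:
# hypothetical log-odds ratios and their standard errors
beta_hat = np.array([0.40, 0.10, 0.55, -0.05, 0.30])
se = np.array([0.25, 0.20, 0.30, 0.22, 0.28])

# Second-stage (prior) model: exposures assumed exchangeable around a common
# mean mu0 with prior standard deviation tau (both values are made up here)
mu0, tau = 0.2, 0.15

# Normal-normal shrinkage: posterior mean under the hierarchical prior
w = (1 / se**2) / (1 / se**2 + 1 / tau**2)
beta_shrunk = w * beta_hat + (1 - w) * mu0

for b, bs in zip(beta_hat, beta_shrunk):
    print(f"{b:+.2f} -> {bs:+.2f}")

# Sensitivity analysis, as the abstract recommends: re-run with different tau
# (e.g. 0.05 or 0.5) to see how conclusions depend on the strength of the prior.
```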

  7. Assessing vaccination as a control strategy in an ongoing epidemic: Bovine tuberculosis in African buffalo

    USGS Publications Warehouse

    Cross, P.C.; Getz, W.M.

    2006-01-01

    Bovine tuberculosis (BTB) is an exotic disease invading the buffalo population (Syncerus caffer) of the Kruger National Park (KNP), South Africa. We used a sex- and age-structured epidemiological model to assess the effectiveness of a vaccination program and define important research directions. The model allows for dispersal between a focal herd and background population and was parameterized with a combination of published data and analyses of over 130 radio-collared buffalo in the central region of the KNP. Radio-tracking data indicated that all sex and age categories move between mixed herds, and males over 8 years old had higher mortality and dispersal rates than any other sex or age category. In part due to the high dispersal rates of buffalo, sensitivity analyses indicate that disease prevalence in the background population accounts for the most variability in the BTB prevalence and quasi-eradication within the focal herd. Vaccination rate and the transmission coefficient were the second and third most important parameters of the sensitivity analyses. Further analyses of the model without dispersal suggest that the amount of vaccination necessary for quasi-eradication (i.e. prevalence below 1%) is high: more than 70% of the calf population would have to be vaccinated every year to reduce the prevalence to less than 1%. If the half-life of the vaccine is less than 5 years, even vaccinating every calf for 50 years may not eradicate BTB. Thus, although vaccination provides a means of controlling BTB prevalence, it should be combined with other control measures if eradication is the objective.

  8. Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.

    PubMed

    Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N

    2016-07-01

    There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent, and we hope our open-source code proves useful in moving towards a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.

  9. Policy impacts estimates are sensitive to data selection in empirical analysis: evidence from the United States – Canada softwood lumber trade dispute

    Treesearch

    Daowei Zhang; Rajan Parajuli

    2016-01-01

    In this paper, we use the U.S. softwood lumber import demand model as a case study to show that the effects of past trade policies are sensitive to the data sample used in empirical analyses.  We conclude that, to be consistent with the purpose of analysis of policy and to ensure all else being equal, policy impacts can only be judged by using data up to the time when...

  10. Sensitivity studies and laboratory measurements for the laser heterodyne spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Allario, F.; Katzberg, S. J.; Larsen, J. C.

    1980-01-01

    Several experiments under development for measuring stratospheric trace gases from Spacelab and satellite platforms are described, involving spectral scanning interferometers and gas filter correlation radiometers (ref. 2) that use limb-scanning solar occultation techniques. An experiment to measure stratospheric trace constituents by Laser Heterodyne Spectroscopy is then presented, together with a summary of sensitivity analyses and supporting laboratory measurements for O3, ClO, and H2O2, in which the instrument transfer function is modeled using a detailed optical receiver design.

  11. Space Station needs, attributes and architectural options. Volume 2, book 1, part 1: Mission requirements

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The baseline mission model used to develop the space station mission-related requirements is described, as well as the 90 civil missions that were evaluated (including the 62 missions that formed the baseline model). Mission-related requirements for the space station baseline are defined and related to space station architectural development. Mission-related sensitivity analyses are discussed.

  12. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  13. Boundary Layer Depth In Coastal Regions

    NASA Astrophysics Data System (ADS)

    Porson, A.; Schayes, G.

    Results from earlier sea breeze simulation studies have shown that the sea breeze is a relevant feature of the Planetary Boundary Layer that still requires effort to diagnose properly in atmospheric models. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation procedures have been tested with a meso-scale model, TVM. The aim was to compare the critical points of boundary layer height determination computed from the turbulent kinetic energy profile with some other standard estimates. Moreover, these results have been analysed with different mixing length formulations. The sensitivity to the formulation is also analysed for a simple coastal configuration.

  14. Sensitivity of the ocean overturning circulation to wind and mixing: theoretical scalings and global ocean models

    NASA Astrophysics Data System (ADS)

    Nikurashin, Maxim; Gunn, Andrew

    2017-04-01

    The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind and heat and freshwater fluxes at the surface and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of these simple conceptual theories in describing the MOC simulated with higher complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally-symmetric box are generally in good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.

  15. Economic evaluation of strategies for restarting anticoagulation therapy after a first event of unprovoked venous thromboembolism.

    PubMed

    Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S

    2017-08-01

    Essentials Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE but at the expense of a higher risk of bleeding, which can also be fatal. Objective An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and was from a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results Base-case model results suggest that treating patients with a predicted 1 year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. Better data are required to predict long-term bleeding risks during therapy in this patient group. © 2017 International Society on Thrombosis and Haemostasis.
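
    A minimal, purely illustrative sketch of a patient-level (microsimulation) Markov model with monthly cycles, in the spirit of the structure described above. Every parameter value below is a made-up placeholder, not a value from the paper, and the sketch omits discounting, age-dependent risks and the decision-rule threshold itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Purely illustrative placeholder inputs (monthly probabilities, costs in GBP, utilities)
P = dict(
    p_vte_month=0.015,   rr_on_therapy=0.2,      # recurrent-VTE risk and its reduction on therapy
    p_bleed_month=0.001, rr_bleed_on=3.0,        # major-bleed risk and its increase on therapy
    cf_vte=0.05, cf_bleed=0.10,                  # case fatality of each event
    c_drug=30.0, c_vte=3000.0, c_bleed=4000.0,   # monthly drug cost and per-event costs
    u_well=0.80, du_therapy=0.01,                # annual utility and therapy disutility
)

def simulate(restart, n_patients=5000, months=12 * 30):
    cost = np.zeros(n_patients)
    qaly = np.zeros(n_patients)
    alive = np.ones(n_patients, dtype=bool)
    for _ in range(months):
        p_vte = P["p_vte_month"] * (P["rr_on_therapy"] if restart else 1.0)
        p_bleed = P["p_bleed_month"] * (P["rr_bleed_on"] if restart else 1.0)
        vte = alive & (rng.random(n_patients) < p_vte)
        bleed = alive & (rng.random(n_patients) < p_bleed)
        cost[alive] += P["c_drug"] if restart else 0.0
        cost[vte] += P["c_vte"]
        cost[bleed] += P["c_bleed"]
        qaly[alive] += (P["u_well"] - (P["du_therapy"] if restart else 0.0)) / 12
        died = (vte & (rng.random(n_patients) < P["cf_vte"])) | \
               (bleed & (rng.random(n_patients) < P["cf_bleed"]))
        alive &= ~died
    return cost.mean(), qaly.mean()

c1, q1 = simulate(restart=True)
c0, q0 = simulate(restart=False)
print(f"ICER of restarting: about {(c1 - c0) / (q1 - q0):,.0f} GBP per QALY")
# The sign and size of this result depend entirely on the placeholder inputs above.
```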

  16. Historical clinical and economic consequences of anemia management in patients with end-stage renal disease on dialysis using erythropoietin stimulating agents versus routine blood transfusions: a retrospective cost-effectiveness analysis.

    PubMed

    Naci, Huseyin; de Lissovoy, Gregory; Hollenbeak, Christopher; Custer, Brian; Hofmann, Axel; McClellan, William; Gitlin, Matthew

    2012-01-01

    To determine whether Medicare's decision to cover routine administration of erythropoietin stimulating agents (ESAs) to treat anemia of end-stage renal disease (ESRD) has been a cost-effective policy relative to standard of care at the time. The authors used summary statistics from the actual cohort of ESRD patients receiving ESAs between 1995 and 2004 to create a simulated patient cohort, which was compared with a comparable simulated cohort assumed to rely solely on blood transfusions. Outcomes modeled from the Medicare perspective included estimated treatment costs, life-years gained, and quality-adjusted life-years (QALYs). Incremental cost-effectiveness ratio (ICER) was calculated relative to the hypothetical reference case of no ESA use in the transfusion cohort. Sensitivity of the results to model assumptions was tested using one-way and probabilistic sensitivity analyses. Estimated total costs incurred by the ESRD population were $155.47B for the cohort receiving ESAs and $155.22B for the cohort receiving routine blood transfusions. Estimated QALYs were 2.56M and 2.29M, respectively, for the two groups. The ICER of ESAs compared to routine blood transfusions was estimated as $873 per QALY gained. The model was sensitive to a number of parameters according to one-way and probabilistic sensitivity analyses. This model was counter-factual as the actual comparison group, whose anemia was managed via transfusion and iron supplements, rapidly disappeared following introduction of ESAs. In addition, a large number of model parameters were obtained from observational studies due to the lack of randomized trial evidence in the literature. This study indicates that Medicare's coverage of ESAs appears to have been cost effective based on commonly accepted levels of willingness-to-pay. The ESRD population achieved substantial clinical benefit at a reasonable cost to society.

  17. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  18. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764

  19. Hypoglycemia alarm enhancement using data fusion.

    PubMed

    Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W

    2010-01-01

    The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.

  20. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  1. Anxiety sensitivity, catastrophic misinterpretations and panic self-efficacy in the prediction of panic disorder severity: towards a tripartite cognitive model of panic disorder.

    PubMed

    Sandin, Bonifacio; Sánchez-Arribas, Carmen; Chorot, Paloma; Valiente, Rosa M

    2015-04-01

    The present study examined the contribution of three main cognitive factors (i.e., anxiety sensitivity, catastrophic misinterpretations of bodily symptoms, and panic self-efficacy) in predicting panic disorder (PD) severity in a sample of patients with a principal diagnosis of panic disorder. It was hypothesized that anxiety sensitivity (AS), catastrophic misinterpretation of bodily sensations, and panic self-efficacy are uniquely related to panic disorder severity. One hundred and sixty-eight participants completed measures of AS, catastrophic misinterpretations of panic-like sensations, and panic self-efficacy prior to receiving treatment. Results of multiple linear regression analyses indicated that AS, catastrophic misinterpretations and panic self-efficacy independently predicted panic disorder severity. Results of path analyses indicated that AS was both directly and indirectly (the indirect path mediated by catastrophic misinterpretations) related to panic severity. Results provide evidence for a tripartite cognitive account of panic disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576-point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21-day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
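
    A minimal sketch of the emulation workflow: a Latin hypercube design over scaled input uncertainties, an "expensive model" stand-in, and a Gaussian process emulator that can then be evaluated cheaply many times for variance-based SA. The two-input toy function and all settings below are invented; the actual study trains emulators on CMAQ output for roughly 30 screened inputs.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(X):
    """Stand-in for a CMAQ run: a toy ozone response to two scaled input perturbations."""
    return 60 + 25 * X[:, 0] - 10 * X[:, 1] + 15 * X[:, 0] * X[:, 1]

# Latin hypercube design over the (scaled) input uncertainties
sampler = qmc.LatinHypercube(d=2, seed=0)
X_design = qmc.scale(sampler.random(60), [-1, -1], [1, 1])
y_design = expensive_model(X_design)

# Train the emulator on the design runs
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X_design, y_design)

# The emulator is cheap enough for Monte Carlo variance-based sensitivity analysis
rng = np.random.default_rng(1)
X_mc = rng.uniform(-1, 1, size=(50000, 2))
y_mc, y_sd = gp.predict(X_mc, return_std=True)
print("emulator mean prediction:", y_mc.mean().round(1),
      "| max predictive sd:", y_sd.max().round(2))
```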

  3. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    NASA Astrophysics Data System (ADS)

    Zavala, M.; Lei, W.; Molina, M. J.; Molina, L. T.

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the base case. We further performed emissions perturbations from the gasoline fleet, diesel fleet, all mobile (gasoline plus diesel) and all emission sources (anthropogenic plus biogenic). The results suggest that although large ozone reductions obtained in the past were from changes in emissions from gasoline vehicles, currently significant benefits could be achieved with additional emission control policies directed to regulation of VOC emissions from diesel and area sources that are high emitters of alkenes, aromatics and aldehydes.

  4. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    NASA Astrophysics Data System (ADS)

    Zavala, M.; Lei, W. F.; Molina, M. J.; Molina, L. T.

    2008-08-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the base case. We further performed emissions perturbations from the gasoline fleet, diesel fleet, all mobile (gasoline plus diesel) and all emission sources (anthropogenic plus biogenic). The results suggest that although large ozone reductions obtained in the past were from changes in emissions from gasoline vehicles, currently significant benefits could be achieved with additional emission control policies directed to regulation of VOC emissions from diesel and area sources that are high emitters of alkenes, aromatics and aldehydes.

  5. The mediating role of interpersonal cognition on the relationships between personality and adolescent ego development.

    PubMed

    Liu, Yih-Lan

    2013-01-01

    The author investigated whether interpersonal cognition mediated the relationships between defense, social sensitivity, and ego development. Participants (N = 616; M age = 15.66 years, SD = .52 year; 276 boys) from northwestern Taiwan completed a battery of questionnaires. Structural equation modeling and mediation analyses supported the hypothesis that interpersonal cognition would mediate the path between defense and ego development, and the path between social sensitivity and ego development. Defense and social sensitivity were found to have direct effects on ego development. The study provides evidence of the mediating effect of interpersonal cognition on the association between personality and ego development.

  6. Experimental study on cross-sensitivity of temperature and vibration of embedded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan

    2018-03-01

    In view of the mechanism underlying cross-sensitivity, a series of calibration experiments is carried out to address the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, from which it is concluded that the reference grating method can effectively resolve the cross-sensitivity of embedded FBGs.

  7. The role of modelling in prioritising and planning clinical trials.

    PubMed

    Chilcott, J; Brennan, A; Booth, A; Karnon, J; Tappenden, P

    2003-01-01

    To identify the role of modelling in planning and prioritising trials. The review focuses on modelling methods used in the construction of disease models and on methods for their analysis and interpretation. Searches were initially developed in MEDLINE and then translated into other databases. Systematic reviews of the methodological and case study literature were undertaken. Search strategies focused on the intersection between three domains: modelling, health technology assessment and prioritisation. The review found that modelling can extend the validity of trials by: generalising from trial populations to specific target groups; generalising to other settings and countries; extrapolating trial outcomes to the longer term; linking intermediate outcome measures to final outcomes; extending analysis to the relevant comparators; adjusting for prognostic factors in trials; and synthesising research results. The review suggested that modelling may offer greatest benefits where the impact of a technology occurs over a long duration, where disease/technology characteristics are not observable, where there are long lead times in research, or for rapidly changing technologies. It was also found that modelling can inform the key parameters for research: sample size, trial duration and population characteristics. One-way, multi-way and threshold sensitivity analyses have been used in informing these aspects but are flawed. The payback approach has been piloted and, while there have been weaknesses in its implementation, the approach does have potential. Expected value of information analysis is the only existing methodology that has been applied in practice and can address all these issues. The potential benefit of this methodology is that the value of research is directly related to its impact on technology commissioning decisions, and is demonstrated in real and absolute rather than relative terms; it assesses the technical efficiency of different types of research. Modelling is not a substitute for data collection. However, modelling can identify trial designs of low priority in informing health technology commissioning decisions. Good practice in undertaking and reporting economic modelling studies requires further dissemination and support, specifically in sensitivity analyses, model validation and the reporting of assumptions. Case studies of the payback approach using stochastic sensitivity analyses should be developed. Use of overall expected value of perfect information should be encouraged in modelling studies seeking to inform prioritisation and planning of health technology assessments. Research is required to assess if the potential benefits of value of information analysis can be realised in practice; on the definition of an adequate objective function; on methods for analysing computationally expensive models; and on methods for updating prior probability distributions.
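
    The expected value of perfect information calculation highlighted above can be sketched in a few lines. This is a minimal illustration, not taken from the review: the two-treatment net-benefit model, the normal distributions and the £20,000 willingness-to-pay threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
wtp = 20_000  # willingness to pay per QALY (GBP), an illustrative threshold

# Hypothetical uncertain parameters for two treatments (probabilistic analysis draws)
qaly_A = rng.normal(5.00, 0.30, n_sim); cost_A = rng.normal(10_000, 1_500, n_sim)
qaly_B = rng.normal(5.15, 0.40, n_sim); cost_B = rng.normal(14_000, 2_000, n_sim)

nb = np.column_stack([wtp * qaly_A - cost_A,     # net monetary benefit of A per draw
                      wtp * qaly_B - cost_B])    # net monetary benefit of B per draw

ev_current = nb.mean(axis=0).max()   # choose the option that is best on average
ev_perfect = nb.max(axis=1).mean()   # choose the best option in every draw (perfect information)
evpi_per_patient = ev_perfect - ev_current
print(f"EVPI: about {evpi_per_patient:,.0f} GBP per patient")
# Multiplying by the effective (discounted) patient population gives an upper bound
# on the value of further research, which can be compared with trial costs.
```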

  8. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
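
    A minimal sketch of the distributed-local idea using a toy two-parameter function (a stand-in, not one of the paper's hydrologic models): derivative-based sensitivities are computed by finite differences at many points sampled across the parameter space and summarised as a distribution of local first-order indices rather than a single global value.

```python
import numpy as np

def model(theta):
    """Toy nonlinear two-parameter response (stand-in for a hydrologic model output)."""
    k, s = theta
    return s * (1 - np.exp(-k)) + 0.1 * k * s**2

rng = np.random.default_rng(3)
samples = rng.uniform([0.1, 0.1], [2.0, 2.0], size=(500, 2))   # points spread over parameter space

def local_first_order(theta, var=0.1**2, h=1e-5):
    """DELSA-style local index: share of local output variance attributed to each parameter."""
    grads = np.array([(model(theta + h * e) - model(theta - h * e)) / (2 * h)
                      for e in np.eye(2)])
    contrib = grads**2 * var            # assumes the same prior variance for both parameters
    return contrib / contrib.sum()

S = np.array([local_first_order(t) for t in samples])
print("median local index, parameter 1:", np.median(S[:, 0]).round(2))
print("fraction of parameter space where parameter 1 dominates:", (S[:, 0] > 0.5).mean().round(2))
```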

  9. [Study on the ingredients of reserpine by TLC-FT-SERS].

    PubMed

    Wang, Y; Zi, F; Wang, Y; Zhao, Y; Zhang, X; Weng, S

    1999-12-01

    A new method for analysing the ingredients of reserpine by thin layer chromatography (TLC) and surface-enhanced Raman spectroscopy (SERS) is reported in this paper. The results show that the characteristic spectral bands of reserpine situated on the thin layer were obtained with a sample amount of about 2 microg. Differences between the SERS and solid spectra were found, and an adsorption model of reserpine on silver sol was proposed. This method can be used to analyse chemical ingredients with high sensitivity.

  10. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  11. Communications network design and costing model technical manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    This computer model provides the capability for analyzing long-haul trunking networks comprising a set of user-defined cities, traffic conditions, and tariff rates. Networks may consist of all terrestrial connectivity, all satellite connectivity, or a combination of terrestrial and satellite connectivity. Network solutions provide the least-cost routes between all cities, the least-cost network routing configuration, and terrestrial and satellite service cost totals. The CNDC model allows analyses involving three specific FCC-approved tariffs, which are uniquely structured and representative of most existing service connectivity and pricing philosophies. User-defined tariffs that can be variations of these three tariffs are accepted as input to the model and allow considerable flexibility in network problem specification. The resulting model extends the domain of network analysis from traditional fixed link cost (distance-sensitive) problems to more complex problems involving combinations of distance and traffic-sensitive tariffs.

  12. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim to stimulate discussion within the community of SA developers and users on setting good practices and defining priorities for future research.

  13. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to specify distributions over multiple branch probabilities that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
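
    As a concrete illustration of the approach described above, the snippet below draws branch probabilities for a three-way chance node from a Dirichlet distribution; the counts and the flat prior are hypothetical, but every draw sums to 1 by construction, which is the property the authors exploit.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical transition counts out of one state with three mutually
    # exclusive destinations (numbers invented for illustration).
    counts = np.array([50, 30, 20])

    # With a flat Dirichlet(1, 1, 1) prior, the posterior is Dirichlet(counts + 1);
    # each draw is a probability vector over the three branches.
    draws = rng.dirichlet(counts + 1, size=10_000)

    print(draws.mean(axis=0))       # posterior mean branch probabilities
    print(draws.sum(axis=1).min())  # every sampled row sums to 1
    ```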

  14. Context-dependent catalepsy intensification is due to classical conditioning and sensitization.

    PubMed

    Amtage, J; Schmidt, W J

    2003-11-01

    Haloperidol-induced catalepsy represents a model of neuroleptic-induced Parkinsonism. Daily administration of haloperidol, followed by testing for catalepsy on a bar and grid, results in a day-to-day increase in catalepsy that is completely context dependent, resulting in a strong placebo effect and in a failure of expression after a change in context. The aim of this study was to analyse the associative learning process that underlies context dependency. Catalepsy intensification was induced by a daily threshold dose of 0.25 mg/kg haloperidol. Extinction training and retesting under haloperidol revealed that sensitization was composed of two components: a context-conditioning component, which can be extinguished, and a context-dependent sensitization component, which cannot be extinguished. Context dependency of catalepsy thus follows precisely the same rules as context dependency of psychostimulant-induced sensitization. Catalepsy sensitization is therefore due to conditioning and sensitization.

  15. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach to quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver operating characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to traditional Lake-Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to Lake-Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.

  16. Cost-utility analysis of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses.

    PubMed

    Kaitani, Toshiko; Nakagami, Gojiro; Iizaka, Shinji; Fukuda, Takashi; Oe, Makoto; Igarashi, Ataru; Mori, Taketoshi; Takemura, Yukie; Mizokami, Yuko; Sugama, Junko; Sanada, Hiromi

    2015-01-01

    The high prevalence of severe pressure ulcers (PUs) is an important issue in Japan. In a previous study, we devised an advanced PU management protocol to enable early detection of and intervention for deep tissue injury and critical colonization. This protocol was effective for preventing more severe PUs. The present study aimed to compare, from a medical provider's perspective, the cost-effectiveness of care provided under an advanced PU management protocol by trained wound, ostomy, and continence nurses (WOCNs) with that of conventional care provided by a control group of WOCNs. A Markov model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness ratio of advanced PU management compared with conventional care. The number of quality-adjusted life-years (QALYs) gained and the cost in Japanese yen (¥; US$1 = ¥120, 2015) were used as the outcomes. Model inputs for clinical probabilities and related costs were based on our previous clinical trial results. Univariate sensitivity analyses were performed. Furthermore, a Bayesian multivariate probabilistic sensitivity analysis of advanced PU management was performed using Monte Carlo simulations. Two different models were created for the initial cohort distribution. For both models, the expected effectiveness for the intervention group using advanced PU management techniques was high, with a low expected cost. The sensitivity analyses suggested that the results were robust. Intervention by WOCNs using advanced PU management techniques was more effective and cost-effective than conventional care. © 2015 by the Wound Healing Society.
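
    To make the structure of such an analysis concrete, here is a minimal Markov cohort sketch with invented states, transition probabilities, costs, and utilities; it is not the model from the study, only an illustration of how costs and QALYs accumulate over cycles before two strategies are compared.

    ```python
    import numpy as np

    def run_cohort(P, cost, utility, cycles=12, start=0):
        """Markov cohort trace: propagate state occupancy through `cycles`
        monthly cycles, accumulating per-cycle costs and QALYs."""
        occupancy = np.zeros(P.shape[0])
        occupancy[start] = 1.0
        total_cost = total_qaly = 0.0
        for _ in range(cycles):
            total_cost += occupancy @ cost
            total_qaly += (occupancy @ utility) / 12.0   # monthly cycles -> years
            occupancy = occupancy @ P                    # row-stochastic transitions
        return total_cost, total_qaly

    # Hypothetical 3-state model (healed / ulcer / severe ulcer); all numbers invented.
    P_advanced     = np.array([[0.95, 0.04, 0.01],
                               [0.30, 0.60, 0.10],
                               [0.10, 0.30, 0.60]])
    P_conventional = np.array([[0.92, 0.06, 0.02],
                               [0.20, 0.65, 0.15],
                               [0.05, 0.30, 0.65]])
    cost    = np.array([100.0, 800.0, 2000.0])   # per-cycle cost placeholders
    utility = np.array([0.85, 0.60, 0.40])       # per-state utility placeholders

    c1, q1 = run_cohort(P_advanced, cost, utility, start=1)
    c0, q0 = run_cohort(P_conventional, cost, utility, start=1)
    dc, dq = c1 - c0, q1 - q0
    print(f"incremental cost {dc:.0f}, incremental QALYs {dq:.3f}")
    # dc < 0 with dq > 0 means the new strategy dominates (cheaper and more
    # effective); otherwise the ICER is dc / dq.
    ```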

  17. Genome-Wide Association Study of the Modified Stumvoll Insulin Sensitivity Index Identifies BCL2 and FAM19A2 as Novel Insulin Sensitivity Loci

    PubMed Central

    Gustafsson, Stefan; Rybin, Denis; Stančáková, Alena; Chen, Han; Liu, Ching-Ti; Hong, Jaeyoung; Jensen, Richard A.; Rice, Ken; Morris, Andrew P.; Mägi, Reedik; Tönjes, Anke; Prokopenko, Inga; Kleber, Marcus E.; Delgado, Graciela; Silbernagel, Günther; Jackson, Anne U.; Appel, Emil V.; Grarup, Niels; Lewis, Joshua P.; Montasser, May E.; Landenvall, Claes; Staiger, Harald; Luan, Jian’an; Frayling, Timothy M.; Weedon, Michael N.; Xie, Weijia; Morcillo, Sonsoles; Martínez-Larrad, María Teresa; Biggs, Mary L.; Chen, Yii-Der Ida; Corbaton-Anchuelo, Arturo; Færch, Kristine; Gómez-Zumaquero, Juan Miguel; Goodarzi, Mark O.; Kizer, Jorge R.; Koistinen, Heikki A.; Leong, Aaron; Lind, Lars; Lindgren, Cecilia; Machicao, Fausto; Manning, Alisa K.; Martín-Núñez, Gracia María; Rojo-Martínez, Gemma; Rotter, Jerome I.; Siscovick, David S.; Zmuda, Joseph M.; Zhang, Zhongyang; Serrano-Rios, Manuel; Smith, Ulf; Soriguer, Federico; Hansen, Torben; Jørgensen, Torben J.; Linnenberg, Allan; Pedersen, Oluf; Walker, Mark; Langenberg, Claudia; Scott, Robert A.; Wareham, Nicholas J.; Fritsche, Andreas; Häring, Hans-Ulrich; Stefan, Norbert; Groop, Leif; O’Connell, Jeff R.; Boehnke, Michael; Bergman, Richard N.; Collins, Francis S.; Mohlke, Karen L.; Tuomilehto, Jaakko; März, Winfried; Kovacs, Peter; Stumvoll, Michael; Psaty, Bruce M.; Kuusisto, Johanna; Laakso, Markku; Meigs, James B.; Dupuis, Josée; Ingelsson, Erik; Florez, Jose C.

    2016-01-01

    Genome-wide association studies (GWAS) have found few common variants that influence fasting measures of insulin sensitivity. We hypothesized that a GWAS of an integrated assessment of fasting and dynamic measures of insulin sensitivity would detect novel common variants. We performed a GWAS of the modified Stumvoll Insulin Sensitivity Index (ISI) within the Meta-Analyses of Glucose and Insulin-Related Traits Consortium. Discovery for genetic association was performed in 16,753 individuals, and replication was attempted for the 23 most significant novel loci in 13,354 independent individuals. Association with ISI was tested in models adjusted for age, sex, and BMI and in a model analyzing the combined influence of the genotype effect adjusted for BMI and the interaction effect between the genotype and BMI on ISI (model 3). In model 3, three variants reached genome-wide significance: rs13422522 (NYAP2; P = 8.87 × 10^-11), rs12454712 (BCL2; P = 2.7 × 10^-8), and rs10506418 (FAM19A2; P = 1.9 × 10^-8). The association at NYAP2 was eliminated by conditioning on the known IRS1 insulin sensitivity locus; the BCL2 and FAM19A2 associations were independent of known cardiometabolic loci. In conclusion, we identified two novel loci and replicated known variants associated with insulin sensitivity. Further studies are needed to clarify the causal variant and function at the BCL2 and FAM19A2 loci. PMID:27416945

  18. Sensitivity of water resources in the Delaware River basin to climate variability and change

    USGS Publications Warehouse

    Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.

    1993-01-01

    Because of the "greenhouse effect," projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climatic change, and presents the results of sensitivity-analysis studies of the potential effects of climate change on water resources in the Delaware River basin. On the basis of sensitivity analyses, potentially serious shortfalls of certain water resources in the basin could result if some climatic-change scenarios become true. The results of basin streamflow-model simulations in this study demonstrate the difficulty in distinguishing effects of climatic change on streamflow and water supply from effects of natural variability in current climate. The future direction of basin changes in most water resources, furthermore, cannot be determined precisely because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans on how to evaluate and respond to changes, should they occur.

  19. An economic evaluation of intravenous versus oral iron supplementation in people on haemodialysis.

    PubMed

    Wong, Germaine; Howard, Kirsten; Hodson, Elisabeth; Irving, Michelle; Craig, Jonathan C

    2013-02-01

    Iron supplementation can be administered either intravenously or orally in patients with chronic kidney disease (CKD) and iron deficiency anaemia, but practice varies widely. The aim of this study was to estimate the health care costs and benefits of parenteral iron compared with oral iron in haemodialysis patients receiving erythropoiesis-stimulating agents (ESAs). Using a broad health care funder perspective, a probabilistic Markov model was constructed to compare the cost-effectiveness and cost-utility of parenteral iron therapy versus oral iron for the management of haemodialysis patients with relative iron deficiency. A series of one-way, multi-way and probabilistic sensitivity analyses were conducted to assess the robustness of the model structure and the extent to which the model's assumptions were sensitive to uncertainties in the input variables. Compared with oral iron, the incremental cost-effectiveness ratios (ICERs) for parenteral iron were $74,760 per life year saved and $34,660 per quality-adjusted life year (QALY) gained. A series of one-way sensitivity analyses showed that the ICER is most sensitive to the probability of achieving haemoglobin (Hb) targets using supplemental iron with a consequential decrease in the standard ESA doses, and to the relative increase in all-cause mortality risk associated with low Hb levels (Hb < 9.0 g/dL). If the willingness-to-pay threshold was set at $50,000/QALY, the proportion of simulations in which parenteral iron was cost-effective compared with oral iron was over 90%. Assuming that there is an overall increased mortality risk associated with a very low Hb level (<9.0 g/dL), using parenteral iron to achieve an Hb target between 9.5 and 12 g/dL is cost-effective compared with oral iron therapy among haemodialysis patients with relative iron deficiency.

  20. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.

  1. The CERAD Neuropsychological Assessment Battery Is Sensitive to Alcohol-Related Cognitive Deficiencies in Elderly Patients: A Retrospective Matched Case-Control Study.

    PubMed

    Kaufmann, Liane; Huber, Stefan; Mayer, Daniel; Moeller, Korbinian; Marksteiner, Josef

    2018-04-01

    Adverse effects of heavy drinking on cognition have frequently been reported. In the present study, we systematically examined for the first time whether clinical neuropsychological assessments may be sensitive to alcohol abuse in elderly patients with suspected minor neurocognitive disorder. A total of 144 elderly patients with and without alcohol abuse (each group n=72; mean age 66.7 years) were selected from a patient pool of n=738 by applying propensity score matching (a statistical method that matches participants in the experimental and control groups by balancing covariates to reduce selection bias). Accordingly, study groups were almost perfectly matched regarding age, education, gender, and Mini Mental State Examination score. Neuropsychological performance was measured using the CERAD (Consortium to Establish a Registry for Alzheimer's Disease). Classification analyses (i.e., decision tree and boosted trees models) were conducted to examine whether CERAD variables or the total score contributed to group classification. Decision tree models disclosed that groups could be reliably classified based on the CERAD variables "Word List Discriminability" (tapping verbal recognition memory, 64% classification accuracy) and "Trail Making Test A" (measuring visuo-motor speed, 59% classification accuracy). Boosted tree analyses further indicated the sensitivity of "Word List Recall" (measuring free verbal recall) for discriminating elderly patients with versus without a history of alcohol abuse. This indicates that specific CERAD variables seem to be sensitive to alcohol-related cognitive dysfunctions in elderly patients with suspected minor neurocognitive disorder. (JINS, 2018, 24, 360-371).

  2. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.

  3. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
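
    One way to write down such a quadrivariate model is sketched below; the notation is ours and is only meant to make the abstract's description concrete, not to reproduce the authors' exact parameterization.

    ```latex
    % Study i, test t in {1,2}; n_{i,D} diseased and n_{i,ND} non-diseased subjects.
    % True positives and true negatives are binomial given Se and Sp:
    %   TP_{it} ~ Binomial(n_{i,D}, Se_{it}),   TN_{it} ~ Binomial(n_{i,ND}, Sp_{it})
    \[
      \operatorname{logit}(\mathrm{Se}_{it}) = \mu^{\mathrm{se}}_{t} + u^{\mathrm{se}}_{it},
      \qquad
      \operatorname{logit}(\mathrm{Sp}_{it}) = \mu^{\mathrm{sp}}_{t} + u^{\mathrm{sp}}_{it},
      \qquad
      \bigl(u^{\mathrm{se}}_{i1}, u^{\mathrm{sp}}_{i1}, u^{\mathrm{se}}_{i2}, u^{\mathrm{sp}}_{i2}\bigr)^{\top}
      \sim \mathcal{N}(\mathbf{0}, \Sigma).
    \]
    ```

    The pooled difference in sensitivities is then expit(μ^se_1) − expit(μ^se_2), and analogously for specificities, with the 4 × 4 covariance matrix Σ capturing the within-study associations across studies and between the two tests that the abstract refers to.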

  4. Potential clinical and economic outcomes of active beta-D-glucan surveillance with preemptive therapy for invasive candidiasis at intensive care units: a decision model analysis.

    PubMed

    Pang, Y-K; Ip, M; You, J H S

    2017-01-01

    Early initiation of antifungal treatment for invasive candidiasis is associated with change in mortality. Beta-D-glucan (BDG) is a fungal cell wall component and a serum diagnostic biomarker of fungal infection. Clinical findings suggested an association between reduced invasive candidiasis incidence in intensive care units (ICUs) and BDG-guided preemptive antifungal therapy. We evaluated the potential cost-effectiveness of active BDG surveillance with preemptive antifungal therapy in patients admitted to adult ICUs from the perspective of Hong Kong healthcare providers. A Markov model was designed to simulate the outcomes of active BDG surveillance with preemptive therapy (surveillance group) and no surveillance (standard care group). Candidiasis-associated outcome measures included mortality rate, quality-adjusted life year (QALY) loss, and direct medical cost. Model inputs were derived from the literature. Sensitivity analyses were conducted to evaluate the robustness of model results. In base-case analysis, the surveillance group was more costly (1387 USD versus 664 USD) (1 USD = 7.8 HKD), with lower candidiasis-associated mortality rate (0.653 versus 1.426 per 100 ICU admissions) and QALY loss (0.116 versus 0.254) than the standard care group. The incremental cost per QALY saved by the surveillance group was 5239 USD/QALY. One-way sensitivity analyses found base-case results to be robust to variations of all model inputs. In probabilistic sensitivity analysis, the surveillance group was cost-effective in 50 % and 100 % of 10,000 Monte Carlo simulations at willingness-to-pay (WTP) thresholds of 7200 USD/QALY and ≥27,800 USD/QALY, respectively. Active BDG surveillance with preemptive therapy appears to be highly cost-effective to reduce the candidiasis-associated mortality rate and save QALYs in the ICU setting.
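
    The headline ICER can be reproduced directly from the figures quoted in the abstract; the quick check below uses only those reported values.

    ```python
    # Values quoted in the abstract (per ICU admission).
    cost_surveillance, cost_standard = 1387.0, 664.0   # USD
    qaly_loss_surveillance, qaly_loss_standard = 0.116, 0.254

    incremental_cost = cost_surveillance - cost_standard        # 723 USD
    qalys_saved = qaly_loss_standard - qaly_loss_surveillance   # 0.138 QALYs
    print(round(incremental_cost / qalys_saved))                # ~5239 USD per QALY saved
    ```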

  5. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.

  6. One size does not fit all: investigating doctors' stated preference heterogeneity for job incentives to inform policy in Thailand.

    PubMed

    Lagarde, Mylene; Pagaiya, Nonglak; Tangcharoensathian, Viroj; Blaauw, Duane

    2013-12-01

    This study investigates heterogeneity in Thai doctors' job preferences at the beginning of their career, with a view to inform the design of effective policies to retain them in rural areas. A discrete choice experiment was designed and administered to 198 young doctors. We analysed the data using several specifications of a random parameter model to account for various sources of preference heterogeneity. By modelling preference heterogeneity, we showed how sensitivity to different incentives varied in different sections of the population. In particular, doctors from rural backgrounds were more sensitive than others to a 45% salary increase and having a post near their home province, but they were less sensitive to a reduction in the number of on-call nights. On the basis of the model results, the effects of two types of interventions were simulated: introducing various incentives and modifying the population structure. The results of the simulations provide multiple elements for consideration for policy-makers interested in designing effective interventions. They also underline the interest of modelling preference heterogeneity carefully. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-x-order prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  8. Thalamic functional connectivity predicts seizure laterality in individual TLE patients: application of a biomarker development strategy.

    PubMed

    Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas

    2015-01-01

    Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.

  9. Cost effectiveness of fingolimod, teriflunomide, dimethyl fumarate and intramuscular interferon-β1a in relapsing-remitting multiple sclerosis.

    PubMed

    Zhang, Xinke; Hay, Joel W; Niu, Xiaoli

    2015-01-01

    The aim of the study was to compare the cost effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and intramuscular (IM) interferon (IFN)-β(1a) as first-line therapies in the treatment of patients with relapsing-remitting multiple sclerosis (RRMS). A Markov model was developed to evaluate the cost effectiveness of disease-modifying drugs (DMDs) from a US societal perspective. The time horizon in the base case was 5 years. The primary outcome was incremental net monetary benefit (INMB), and the secondary outcome was incremental cost-effectiveness ratio (ICER). The base case INMB willingness-to-pay (WTP) threshold was assumed to be US$150,000 per quality-adjusted life year (QALY), and the costs were in 2012 US dollars. One-way sensitivity analyses and probabilistic sensitivity analysis were conducted to test the robustness of the model results. Dimethyl fumarate dominated all other therapies over the range of WTPs, from US$0 to US$180,000. Compared with IM IFN-β(1a), at a WTP of US$150,000, INMBs were estimated at US$36,567, US$49,780, and US$80,611 for fingolimod, teriflunomide, and dimethyl fumarate, respectively. The ICER of fingolimod versus teriflunomide was US$3,201,672. One-way sensitivity analyses demonstrated the model results were sensitive to the acquisition costs of DMDs and the time horizon, but in most scenarios, cost-effectiveness rankings remained stable. Probabilistic sensitivity analysis showed that for more than 90% of the simulations, dimethyl fumarate was the optimal therapy across all WTP values. The three oral therapies were favored in the cost-effectiveness analysis. Of the four DMDs, dimethyl fumarate was a dominant therapy to manage RRMS. Apart from dimethyl fumarate, teriflunomide was the most cost-effective therapy compared with IM IFN-β(1a), with an ICER of US$7,115.
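
    For readers comparing the two outcome measures used here, the standard definitions are:

    ```latex
    \[
      \mathrm{INMB} = \lambda \,\Delta \mathrm{QALY} - \Delta \mathrm{Cost},
      \qquad
      \mathrm{ICER} = \frac{\Delta \mathrm{Cost}}{\Delta \mathrm{QALY}},
    \]
    ```

    where λ is the willingness-to-pay threshold (US$150,000 per QALY in the base case) and the differences are taken relative to the comparator; the strategy with the largest INMB at a given λ is preferred, which is why the rankings above are reported across a range of WTP values.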

  10. The influence of track modelling options on the simulation of rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Di Gialleonardo, Egidio; Braghin, Francesco; Bruni, Stefano

    2012-09-01

    This paper investigates the effect of different models for track flexibility on the simulation of railway vehicle running dynamics on tangent and curved track. To this end, a multi-body model of the rail vehicle is defined including track flexibility effects on three levels of detail: a perfectly rigid pair of rails, a sectional track model and a three-dimensional finite element track model. The influence of the track model on the calculation of the nonlinear critical speed is pointed out and it is shown that neglecting the effect of track flexibility results in an overestimation of the critical speed by more than 10%. Vehicle response to stochastic excitation from track irregularity is also investigated, analysing the effect of track flexibility models on the vertical and lateral wheel-rail contact forces. Finally, the effect of the track model on the calculation of dynamic forces produced by wheel out-of-roundness is analysed, showing that peak dynamic loads are very sensitive to the track model used in the simulation.

  11. Insensitive parenting may accelerate the development of the amygdala-medial prefrontal cortex circuit.

    PubMed

    Thijssen, Sandra; Muetzel, Ryan L; Bakermans-Kranenburg, Marian J; Jaddoe, Vincent W V; Tiemeier, Henning; Verhulst, Frank C; White, Tonya; Van Ijzendoorn, Marinus H

    2017-05-01

    This study examined whether the association between age and amygdala-medial prefrontal cortex (mPFC) connectivity in typically developing 6- to 10-year-old children is correlated with parental care. Resting-state functional magnetic resonance imaging scans were acquired from 124 children of the Generation R Study who at 4 years old had been observed interacting with their parents to assess maternal and paternal sensitivity. Amygdala functional connectivity was assessed using a general linear model with the amygdalae time series as explanatory variables. Higher level analyses assessing Sensitivity × Age as well as exploratory Sensitivity × Age × Gender interaction effects were performed restricted to voxels in the mPFC. We found significant Sensitivity × Age interaction effects on amygdala-mPFC connectivity. Age was related to stronger amygdala-mPFC connectivity in children with a lower combined parental sensitivity score (b = 0.11, p = .004, b = 0.06, p = .06, right and left amygdala, respectively), but not in children with a higher parental sensitivity score, (b = -0.07, p = .12, b = -0.06, p = .12, right and left amygdala, respectively). A similar effect was found for maternal sensitivity, with stronger amygdala-mPFC connectivity in children with less sensitive mothers. Exploratory (parental, maternal, paternal) Sensitivity × Age × Gender interaction analyses suggested that this effect was especially pronounced in girls. Amygdala-mPFC resting-state functional connectivity has been shown to increase from age 10.5 years onward, implying that the positive association between age and amygdala-mPFC connectivity in 6- to 10-year-old children of less sensitive parents represents accelerated development of the amygdala-mPFC circuit.

  12. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  13. Climate sensitivity of shrub growth across the tundra biome

    NASA Astrophysics Data System (ADS)

    Myers-Smith, Isla H.; Elmendorf, Sarah C.; Beck, Pieter S. A.; Wilmking, Martin; Hallinger, Martin; Blok, Daan; Tape, Ken D.; Rayback, Shelly A.; Macias-Fauria, Marc; Forbes, Bruce C.; Speed, James D. M.; Boulanger-Lapointe, Noémie; Rixen, Christian; Lévesque, Esther; Schmidt, Niels Martin; Baittinger, Claudia; Trant, Andrew J.; Hermanutz, Luise; Collier, Laura Siegwart; Dawes, Melissa A.; Lantz, Trevor C.; Weijers, Stef; Jørgensen, Rasmus Halfdan; Buchwal, Agata; Buras, Allan; Naito, Adam T.; Ravolainen, Virve; Schaepman-Strub, Gabriela; Wheeler, Julia A.; Wipf, Sonja; Guay, Kevin C.; Hik, David S.; Vellend, Mark

    2015-09-01

    Rapid climate warming in the tundra biome has been linked to increasing shrub dominance. Shrub expansion can modify climate by altering surface albedo, energy and water balance, and permafrost, yet the drivers of shrub growth remain poorly understood. Dendroecological data consisting of multi-decadal time series of annual shrub growth provide an underused resource to explore climate-growth relationships. Here, we analyse circumpolar data from 37 Arctic and alpine sites in 9 countries, including 25 species, and ~42,000 annual growth records from 1,821 individuals. Our analyses demonstrate that the sensitivity of shrub growth to climate was: (1) heterogeneous, with European sites showing greater summer temperature sensitivity than North American sites, and (2) higher at sites with greater soil moisture and for taller shrubs (for example, alders and willows) growing at their northern or upper elevational range edges. Across latitude, climate sensitivity of growth was greatest at the boundary between the Low and High Arctic, where permafrost is thawing and most of the global permafrost soil carbon pool is stored. The observed variation in climate-shrub growth relationships should be incorporated into Earth system models to improve future projections of climate change impacts across the tundra biome.

  14. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data . The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  15. Maladaptive Five Factor Model personality traits associated with Borderline Personality Disorder indirectly affect susceptibility to suicide ideation through increased anxiety sensitivity cognitive concerns.

    PubMed

    Tucker, Raymond P; Lengel, Greg J; Smith, Caitlin E; Capron, Dan W; Mullins-Sweatt, Stephanie N; Wingate, LaRicka R

    2016-12-30

    The current study investigated the relationship between maladaptive Five-Factor Model (FFM) personality traits, anxiety sensitivity cognitive concerns, and suicide ideation in a sample of 131 undergraduate students who were selected based on their scores on a screening questionnaire regarding Borderline Personality Disorder (BPD) symptoms. Those who endorsed elevated BPD symptoms in a pre-screen analysis completed at the beginning of each semester were oversampled in comparison to those with low or moderate symptoms. Indirect effect (mediation) results indicated that the maladaptive personality traits of anxious/uncertainty, dysregulated anger, self-disturbance, behavioral dysregulation, dissociative tendencies, distrust, manipulativeness, oppositional, and rashness had indirect effects on suicide ideation through anxiety sensitivity cognitive concerns. All of these personality traits correlated with suicide ideation as well. The maladaptive personality traits of despondence, affective dysregulation, and fragility were positive correlates of suicide ideation and predicted suicide ideation when all traits were entered in one linear regression model, but were not indirectly related through anxiety sensitivity cognitive concerns. The implications of targeting anxiety sensitivity cognitive concerns in evidence-based practices for reducing suicide risk in those with BPD are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
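
    The indirect (mediated) effects reported above are typically estimated as a product of regression coefficients with a bootstrap confidence interval; the sketch below shows that computation on simulated data with hypothetical variable names, not the study's actual data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def indirect_effect(x, m, y):
        """Product-of-coefficients indirect effect of x on y through mediator m:
        a (x -> m) multiplied by b (m -> y, controlling for x)."""
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
        return a * b

    rng = np.random.default_rng(1)
    n = 131                                        # sample size matching the abstract
    trait = rng.normal(size=n)                     # hypothetical maladaptive trait score
    concerns = 0.5 * trait + rng.normal(size=n)    # simulated mediator
    outcome = 0.4 * concerns + rng.normal(size=n)  # simulated outcome score

    boot = []
    for _ in range(2000):                          # percentile bootstrap for the CI
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(trait[idx], concerns[idx], outcome[idx]))

    print(indirect_effect(trait, concerns, outcome))
    print(np.percentile(boot, [2.5, 97.5]))        # CI excluding 0 -> indirect effect
    ```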

  16. VIIRS-J1 Polarization Narrative

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; McCorkel, Joel; McIntire, Jeff; Moyer, David; McAndrew, Brendan; Brown, Steven W.; Lykke, Keith; Butler, James; Meister, Gerhard; Thome, Kurtis J.

    2015-01-01

    The VIS/NIR band polarization sensitivity of the Joint Polar Satellite System 1 (JPSS1) Visible/Infrared Imaging Radiometer Suite (VIIRS) instrument was measured using a broadband source. While polarization sensitivity for bands M5-M7, I1, and I2 was less than 2.5%, the maximum polarization sensitivity for bands M1, M2, M3, and M4 was measured to be 6.4%, 4.4%, 3.1%, and 4.3%, respectively, with a polarization characterization uncertainty of less than 0.3%. A detailed polarization model indicated that the large polarization sensitivity observed in the M1 to M4 bands was mainly due to the large polarization sensitivity introduced at the leading and trailing edges of the newly manufactured VISNIR bandpass focal plane filters installed in front of the VISNIR detectors. This was confirmed by polarization measurements of bands M1 and M4 using monochromatic light. Discussed are the activities leading up to and including the instrument's two polarization tests, the polarization model and its results, the role of the focal plane filters, the polarization testing of the Aft-Optics-Assembly, the testing of the polarizers at Goddard and NIST, and the use of NIST's T-SIRCUS for polarization testing, along with the associated analyses and results.

  17. Neoadjuvant therapy versus upfront surgical strategies in resectable pancreatic cancer: A Markov decision analysis.

    PubMed

    de Geus, S W L; Evans, D B; Bliss, L A; Eskander, M F; Smith, J K; Wolff, R A; Miksad, R A; Weinstein, M C; Tseng, J F

    2016-10-01

    Neoadjuvant therapy is gaining acceptance as a valid treatment option for borderline resectable pancreatic cancer; however, its value for clearly resectable pancreatic cancer remains controversial. The aim of this study was to use a Markov decision analysis model, in the absence of adequately powered randomized trials, to compare the life expectancy (LE) and quality-adjusted life expectancy (QALE) of neoadjuvant therapy to conventional upfront surgical strategies in resectable pancreatic cancer patients. A Markov decision model was created to compare two strategies: attempted pancreatic resection followed by adjuvant chemoradiotherapy and neoadjuvant chemoradiotherapy followed by restaging with, if appropriate, attempted pancreatic resection. Data obtained through a comprehensive systematic search in PUBMED of the literature from 2000 to 2015 were used to estimate the probabilities used in the model. Deterministic and probabilistic sensitivity analyses were performed. Of the 786 potentially eligible studies identified, 22 studies met the inclusion criteria and were used to extract the probabilities used in the model. Base case analyses of the model showed a higher LE (32.2 vs. 26.7 months) and QALE (25.5 vs. 20.8 quality-adjusted life months) for patients in the neoadjuvant therapy arm compared to upfront surgery. Probabilistic sensitivity analyses for LE and QALE revealed that neoadjuvant therapy is favorable in 59% and 60% of the cases respectively. Although conceptual, these data suggest that neoadjuvant therapy offers substantial benefit in LE and QALE for resectable pancreatic cancer patients. These findings highlight the value of further prospective randomized trials comparing neoadjuvant therapy to conventional upfront surgical strategies. Copyright © 2016 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.

  18. Virtual Patients and Sensitivity Analysis of the Guyton Model of Blood Pressure Regulation: Towards Individualized Models of Whole-Body Physiology

    PubMed Central

    Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall

    2012-01-01

    Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561

  19. [Psychosocial stressors and pain sensitivity in chronic pain disorder with somatic and psychological factors (F45.41)].

    PubMed

    Studer, M; Stewart, J; Egloff, N; Zürcher, E; von Känel, R; Brodbeck, J; Grosse Holtforth, M

    2017-02-01

    Increased pain sensitivity is characteristic for patients with chronic pain disorder with somatic and psychological factors (F45.41). Persistent stress can induce, sustain, and intensify pain sensitivity, thereby modulating pain perception. In this context, it would be favorable to investigate which psychosocial stressors are empirically linked to pain sensitivity. The aim of this study was to examine the relationship between psychosocial stressors and pain sensitivity in a naturalistic sample of patients with chronic pain disorder with somatic and psychological factors (F45.41). We assessed 166 patients with chronic pain disorder with somatic and psychological factors (F45.41) at entry into an inpatient pain clinic. Pain sensitivity was measured with a pain provocation test (Algopeg) at the middle finger and earlobe. Stressors assessed were exposure to war experiences, adverse childhood experiences, illness-related inability to work, relationship problems, and potentially life-threatening accidents. Correlation analyses and structural equation modeling were used to examine which stressors showed the strongest prediction of pain sensitivity. Patients exhibited generally heightened pain sensitivity. Both exposure to war and illness-related inability to work showed significant bivariate correlations with pain sensitivity. In addition to age, they also predicted a further increase in pain sensitivity in the structural equation model. Bearing in mind the limitations of this cross-sectional study, these findings may contribute to a better understanding of the link between psychosocial stressors and pain sensitivity.

  20. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
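
    The brute-force baseline mentioned in the abstract is one-sided finite differencing of each design variable; a minimal sketch is below, with a cheap algebraic surrogate standing in for the coupled aero-structural analysis (names and coefficients are hypothetical).

    ```python
    import numpy as np

    def finite_difference_gradient(f, x, rel_step=1e-6):
        """One-sided finite-difference gradient of a scalar response f with respect
        to design variables x. Each component costs one extra evaluation of f,
        which is what makes this baseline expensive for high-fidelity analyses."""
        x = np.asarray(x, dtype=float)
        f0 = f(x)
        grad = np.empty_like(x)
        for i in range(x.size):
            h = rel_step * max(abs(x[i]), 1.0)
            xp = x.copy()
            xp[i] += h
            grad[i] = (f(xp) - f0) / h
        return grad

    # Hypothetical smooth surrogate for a lift coefficient from a coupled analysis.
    lift_coeff = lambda p: 0.1 * p[0] + 0.05 * p[0] * p[1] - 0.02 * p[1] ** 2
    print(finite_difference_gradient(lift_coeff, [2.0, 5.0]))  # ~[0.35, -0.10]
    ```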

  1. Are measures of pain sensitivity associated with pain and disability at 12-month follow up in chronic neck pain?

    PubMed

    Moloney, Niamh; Beales, Darren; Azoory, Roxanne; Hübscher, Markus; Waller, Robert; Gibbons, Rebekah; Rebbeck, Trudy

    2018-06-14

    Pain sensitivity and psychosocial issues are prognostic of poor outcome in acute neck disorders. However, knowledge of associations between pain sensitivity and ongoing pain and disability in chronic neck pain are lacking. We aimed to investigate associations of pain sensitivity with pain and disability at the 12-month follow-up in people with chronic neck pain. The predictor variables were: clinical and quantitative sensory testing (cold, pressure); neural tissue sensitivity; neuropathic symptoms; comorbidities; sleep; psychological distress; pain catastrophizing; pain intensity (for the model explaining disability at 12 months only); and disability (for the model explaining pain at 12 months only). Data were analysed using uni- and multivariate regression models to assess associations with pain and disability at the 12-month follow-up (n = 64 at baseline, n = 51 at follow-up). Univariable associations between all predictor variables and pain and disability were evident (r > 0.3; p < 0.05), except for cold and pressure pain thresholds and cold sensitivity. For disability at the 12-month follow-up, 24.0% of the variance was explained by psychological distress and comorbidities. For pain at 12 months, 39.8% of the variance was explained primarily by baseline disability. Neither clinical nor quantitative measures of pain sensitivity were meaningfully associated with long-term patient-reported outcomes in people with chronic neck pain, limiting their clinical application in evaluating prognosis. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Variability in soybean yield in Brazil stemming from the interaction of heterogeneous management and climate variability

    NASA Astrophysics Data System (ADS)

    Cohn, A.; Bragança, A.; Jeffries, G. R.

    2017-12-01

    An increasing share of global agricultural production can be found in the humid tropics. Therefore, an improved understanding of the mechanisms governing variability in the output of tropical agricultural systems is of increasing importance for food security, including through climate change adaptation. Yet the long window over which many tropical crops can be sown and the diversity of crop varieties and management practices combine to challenge inference about climate risk to cropping output in analyses of tropical crop-climate sensitivity that employ administrative data. In this paper, we leverage a newly developed spatially explicit dataset of soybean yields in Brazil to address this problem. The dataset was built by training a model of remotely sensed vegetation index data and land cover classification data on a rich in situ dataset of soybean yield and management variables collected over the period 2006 to 2016. The dataset contains soybean yields by planting date, cropping frequency, and maturity group for each 5-km grid cell in Brazil. We model variation in these yields using an approach that estimates the influence of management factors on the sensitivity of soybean yields to variability in cumulative solar radiation, extreme degree days, growing degree days, flooding rain in the harvest period, and dry spells in the rainy season. We find strong variation in climate sensitivity by management class. Planting date and maturity group each explained a great deal more variation in yield sensitivity than did cropping frequency. Brazil collects comparatively fine spatial resolution yield data, but our attempt to replicate our results using administrative soy yield data revealed substantially lower crop-climate sensitivity, suggesting that previous analyses employing administrative data may have underestimated climate risk to tropical soy production.

  3. Percolation analyses of observed and simulated galaxy clustering

    NASA Astrophysics Data System (ADS)

    Bhavsar, S. P.; Barrow, J. D.

    1983-11-01

    A percolation cluster analysis is performed on equivalent regions of the CFA redshift survey of galaxies and the 4000-body simulations of gravitational clustering made by Aarseth, Gott and Turner (1979). The observed and simulated percolation properties are compared and, unlike correlation and multiplicity function analyses, favour high-density (Omega = 1) models with n = -1 initial data. The present results show that, at the level of percolation analysis, the three-dimensional data are consistent with the degree of filamentary structure present in isothermal models of galaxy formation. It is also found that the percolation structure of the CFA data is a function of depth. Percolation structure does not appear to be a sensitive probe of intrinsic filamentary structure.

  4. Capturing strain localization behind a geosynthetic-reinforced soil wall

    NASA Astrophysics Data System (ADS)

    Lai, Timothy Y.; Borja, Ronaldo I.; Duvernay, Blaise G.; Meehan, Richard L.

    2003-04-01

    This paper presents the results of finite element (FE) analyses of shear strain localization that occurred in cohesionless soils supported by a geosynthetic-reinforced retaining wall. The innovative aspects of the analyses include the capture of the localized deformation and the accompanying collapse mechanism using a recently developed embedded strong discontinuity model. The case study analysed, reported in previous publications, consists of a 3.5-m tall, full-scale reinforced wall model deforming in plane strain and loaded to failure by a surcharge at the surface. Results of the analysis suggest that strain localization develops from the toe of the wall and propagates upward to the ground surface, forming a curved failure surface. This is in agreement with a well-documented failure mechanism experienced by the physical wall model, in which internal failure surfaces developed behind the wall as a result of the surface loading. Important features of the analyses include mesh sensitivity studies and a comparison of the localization properties predicted by different pre-localization constitutive models, including a family of three-invariant elastoplastic constitutive models appropriate for frictional/dilatant materials. Results of the analysis demonstrate the potential of the enhanced FE method for capturing a collapse mechanism characterized by the presence of a failure, or slip, surface through earthen materials.

  5. A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1974-01-01

    The accuracy of remotely sensed measurements to provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing, directly, indirectly or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.

  6. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
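
    The abstract does not spell out which publication-bias procedures were used, so the sketch below shows just one generic member of that family, a precision-effect (Egger-type) weighted regression on invented study-level correlations and standard errors; it is illustrative only, not a reproduction of the paper's analyses.

    ```python
    # Hedged illustration of one common publication-bias check: regress effect size on
    # its standard error with inverse-variance weights. A non-zero slope on SE signals
    # funnel asymmetry; the intercept is a precision-adjusted effect estimate.
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical per-study validities (correlations) and their standard errors.
    r = np.array([0.18, 0.22, 0.30, 0.12, 0.26, 0.35, 0.20, 0.28])
    se = np.array([0.05, 0.04, 0.09, 0.03, 0.06, 0.10, 0.05, 0.08])

    X = sm.add_constant(se)
    fit = sm.WLS(r, X, weights=1.0 / se**2).fit()
    print(fit.params)   # [precision-adjusted effect, asymmetry slope]
    print(fit.pvalues)
    ```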

  7. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  8. Interparental Violence and Childhood Adjustment: How and Why Maternal Sensitivity is a Protective Factor

    PubMed Central

    Manning, Liviah G.; Davies, Patrick T.; Cicchetti, Dante

    2014-01-01

    This study examined sensitive parenting as a protective factor in relations between interparental violence and children’s coping and psychological adjustment. Using a multi-method approach, a high-risk sample of 201 two-year-olds and their mothers participated in three annual waves of data collection. Moderator analyses revealed that sensitive parenting buffered the risk posed by interparental violence on children’s changes in externalizing and prosocial development over a two-year period. Tests of mediated moderation further indicated that sensitive parenting protected children from the vulnerability of growing up in a violent home through its association with lower levels of children’s angry reactivity to interparental conflict. Results highlight the significance of identifying the mechanisms that mediate protective factors in models of family adversity. PMID:25132541

  9. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    NASA Astrophysics Data System (ADS)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
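
    As an illustration of the variance-based part of such an analysis, the sketch below computes first- and total-order Sobol' indices with the SALib package over a toy response; the parameter ranges and stand-in function are assumptions, and the study itself used an effectively subsampled quadratures approach rather than Saltelli sampling.

    ```python
    # Hedged Sobol' sensitivity sketch over three vegetation parameters.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["stem_density", "plant_height", "stem_diameter"],  # illustrative only
        "bounds": [[100.0, 1000.0], [0.1, 1.0], [0.002, 0.01]],
    }

    X = saltelli.sample(problem, 512)  # N * (2D + 2) parameter samples

    def toy_wave_dissipation(density, height, diameter):
        # Stand-in for a COAWST run: dissipation grows with frontal area per unit volume.
        return density * height * diameter ** 1.5

    Y = np.array([toy_wave_dissipation(*row) for row in X])

    Si = sobol.analyze(problem, Y)
    print(Si["S1"])   # first-order indices
    print(Si["ST"])   # total-order indices
    ```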

  10. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    NASA Astrophysics Data System (ADS)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary effects and the one-at-a-time (OAT) approach; and (ii) we applied Sobol's global sensitivity analysis method, which is based on variance decomposition. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model prediction under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information on the understanding of the dominant mechanisms responsible for colloidal transport, colloid-facilitated contaminant transport and, also, the impacts of colloid detachment/deposition processes on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047. Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.
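
    To make the screening step concrete, here is a minimal Morris elementary-effects sketch using the SALib package, with a toy stand-in for a HYDRUS run over an illustrative three-parameter subset; the parameter ranges and response function are assumptions, not values from the study.

    ```python
    # Hedged Morris screening sketch (step (i) above): rank parameters by mu* and sigma.
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["psi_m", "kdmc", "Ks"],            # illustrative subset of parameters
        "bounds": [[0.0, 1.0], [0.0, 1.0], [1e-6, 1e-4]],
    }

    X = morris_sample(problem, N=50, num_levels=4)   # 50 trajectories

    def toy_travel_time(psi_m, kdmc, ks):
        # Stand-in for a HYDRUS run returning contaminant travel time.
        return 1.0 / (ks * 1e4) + 2.0 * psi_m - 0.5 * kdmc

    Y = np.array([toy_travel_time(*row) for row in X])

    res = morris_analyze(problem, X, Y, num_levels=4)
    for name, mu_star, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
        print(f"{name}: mu*={mu_star:.3f} sigma={sigma:.3f}")
    ```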

  11. Decadal-scale sensitivity of Northeast Greenland ice flow to errors in surface mass balance using ISSM

    NASA Astrophysics Data System (ADS)

    Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.

    2013-06-01

    The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.

  12. Sensitivity Analysis of PM2.5 in Seoul to Emissions and Reaction Rates Using the GEOS-Chem and its Adjoint Model

    NASA Astrophysics Data System (ADS)

    Lee, H. M.; Park, R.; Henze, D. K.; Shim, C.; Shin, H. J.; Song, I. H.; Park, J. S.; Park, S. M.; Moon, K. J.

    2015-12-01

    The sources of PM2.5 are poorly quantified in Seoul, Korea, where tens of millions of people are exposed daily to PM2.5 concentrations exceeding the air quality criteria. We used a global 3-D chemical transport model (GEOS-Chem) and its adjoint to investigate the sensitivities of PM2.5 concentrations in Seoul to emission sources, sectors, and chemical reaction rates. We first conducted forward model simulations using a nested version of GEOS-Chem with 0.25°x0.3125° spatial resolution in East Asia for July 2012 - July 2013. We evaluated the model by comparing it with PM2.5 mass and chemical composition observations at National Institute of Environmental Research sites in Korea. The model reasonably reproduces the observed seasonal variability of PM2.5 concentrations (R=0.3-0.6), but tends to overestimate the observations in summer and underestimate them in winter. Our sensitivity analyses show dominant contributions from local emission sources to PM2.5 concentrations in Seoul, compared with trans-boundary transport from outside the region, which is important for long-lived tracers in spring. Other results, including the model sensitivity to input parameters and the updated emissions, are used to improve the model performance and to provide strategic information for the KORUS-AQ flight measurement campaign in May-June 2016.

  13. Cost-effectiveness analysis in the Spanish setting of the PEAK trial of panitumumab plus mFOLFOX6 compared with bevacizumab plus mFOLFOX6 for first-line treatment of patients with wild-type RAS metastatic colorectal cancer.

    PubMed

    Rivera, Fernando; Valladares, Manuel; Gea, Salvador; López-Martínez, Noemí

    2017-06-01

    To assess the cost-effectiveness of panitumumab in combination with mFOLFOX6 (oxaliplatin, 5-fluorouracil, and leucovorin) vs bevacizumab in combination with mFOLFOX6 as first-line treatment of patients with wild-type RAS metastatic colorectal cancer (mCRC) in Spain. A semi-Markov model was developed including the following health states: Progression free; Progressive disease: Treat with best supportive care; Progressive disease: Treat with subsequent active therapy; Attempted resection of metastases; Disease free after metastases resection; Progressive disease: after resection and relapse; and Death. Parametric survival analyses of patient-level progression free survival and overall survival data from the PEAK Phase II clinical trial were used to estimate health state transitions. Additional data from the PEAK trial were considered for the dose and duration of therapy, the use of subsequent therapy, the occurrence of adverse events, and the incidence and probability of time to metastasis resection. Utility weightings were calculated from patient-level data from panitumumab trials evaluating first-, second-, and third-line treatments. The study was performed from the Spanish National Health System (NHS) perspective including only direct costs. A life-time horizon was applied. Probabilistic sensitivity analyses and scenario sensitivity analyses were performed to assess the robustness of the model. Based on the PEAK trial, which demonstrated greater efficacy of panitumumab vs bevacizumab, both in combination with mFOLFOX6 first-line in wild-type RAS mCRC patients, the estimated incremental cost per life-year gained was €16,567 and the estimated incremental cost per quality-adjusted life year gained was €22,794. The sensitivity analyses showed the model was robust to alternative parameters and assumptions. The analysis was based on a simulation model and, therefore, the results should be interpreted cautiously. Based on the PEAK Phase II clinical trial and taking into account Spanish costs, the results of the analysis showed that first-line treatment of mCRC with panitumumab + mFOLFOX6 could be considered a cost-effective option compared with bevacizumab + mFOLFOX6 for the Spanish NHS.
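
    For readers unfamiliar with how such models turn state-transition traces into an incremental cost-effectiveness ratio, here is a deliberately tiny cohort sketch with invented transition probabilities, costs, and utilities; it is not the PEAK-based semi-Markov model described above.

    ```python
    # Toy three-state cohort trace (progression free -> progressive -> dead) and ICER.
    import numpy as np

    cycles = 26 * 4  # quarterly cycles over an (illustrative) long horizon

    def run_arm(p_progress, p_die, cost_per_cycle, utility_pf=0.80, utility_prog=0.60,
                discount=0.035, cycle_years=0.25):
        trace = np.array([1.0, 0.0, 0.0])  # whole cohort starts progression free
        cost = qaly = 0.0
        for t in range(cycles):
            d = 1.0 / (1.0 + discount) ** (t * cycle_years)
            cost += d * trace[0] * cost_per_cycle
            qaly += d * cycle_years * (trace[0] * utility_pf + trace[1] * utility_prog)
            moved = trace[0] * p_progress       # PF -> progressive
            died = trace[1] * p_die             # progressive -> dead
            trace = np.array([trace[0] - moved, trace[1] + moved - died, trace[2] + died])
        return cost, qaly

    cost_a, qaly_a = run_arm(p_progress=0.10, p_die=0.12, cost_per_cycle=9000.0)  # arm A
    cost_b, qaly_b = run_arm(p_progress=0.12, p_die=0.12, cost_per_cycle=7500.0)  # arm B

    icer = (cost_a - cost_b) / (qaly_a - qaly_b)
    print(f"Incremental cost per QALY gained: {icer:,.0f}")
    ```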

  14. Revisiting the cost-effectiveness of universal cervical length screening: importance of progesterone efficacy.

    PubMed

    Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John

    2016-07-01

    Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of both strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a one-way sensitivity analysis, screening results in cost savings if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay threshold of $60,000/QALY in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34% and 93%. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than has been suggested by available data for US women. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
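
    The one-way sensitivity idea above can be reduced to a loop over assumed efficacy values; the sketch below does exactly that with made-up costs, risks, and QALY gains (none of them the study's inputs), simply to show how a break-even efficacy against a willingness-to-pay threshold is located.

    ```python
    # Toy one-way sensitivity sweep of the assumed treatment efficacy.
    import numpy as np

    def icer_for_efficacy(rr_reduction,
                          base_ptb_risk=0.005,          # hypothetical PTB risk affected by screening
                          screen_cost_per_woman=200.0,  # hypothetical incremental cost
                          qaly_gain_per_ptb_averted=1.0):
        ptb_averted = base_ptb_risk * rr_reduction
        return screen_cost_per_woman / (ptb_averted * qaly_gain_per_ptb_averted)

    wtp = 60000.0  # willingness-to-pay threshold ($/QALY)
    for eff in np.arange(0.05, 1.00, 0.05):
        icer = icer_for_efficacy(eff)
        flag = "below WTP" if icer <= wtp else ""
        print(f"efficacy {eff:4.0%}: ICER ${icer:,.0f}/QALY {flag}")
    ```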

  15. Urban tree mortality: a primer on demographic approaches

    Treesearch

    Lara A. Roman; John J. Battles; Joe R. McBride

    2016-01-01

    Realizing the benefits of tree planting programs depends on tree survival. Projections of urban forest ecosystem services and cost-benefit analyses are sensitive to assumptions about tree mortality rates. Long-term mortality data are needed to improve the accuracy of these models and optimize the public investment in tree planting. With more accurate population...

  16. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
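
    The quantities referred to above have compact matrix expressions: λ is the dominant eigenvalue of the projection matrix, sensitivities are built from its left and right eigenvectors, and elasticities rescale the sensitivities proportionally. The sketch below uses a made-up three-stage matrix, not the killer whale data.

    ```python
    # Lambda, sensitivities s_ij = v_i w_j / <v, w>, and elasticities e_ij = (a_ij/lambda) s_ij.
    import numpy as np

    A = np.array([[0.00, 0.10, 0.30],   # made-up stage-structured projection matrix
                  [0.60, 0.00, 0.00],
                  [0.00, 0.75, 0.90]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals.real[k]
    w = np.abs(vecs[:, k].real)                          # right eigenvector: stable stage structure

    vals_t, vecs_t = np.linalg.eig(A.T)
    v = np.abs(vecs_t[:, np.argmax(vals_t.real)].real)   # left eigenvector: reproductive values

    sensitivity = np.outer(v, w) / (v @ w)
    elasticity = (A / lam) * sensitivity

    print("lambda =", round(lam, 3))
    print("elasticities (sum to 1):")
    print(elasticity.round(3), elasticity.sum().round(3))
    ```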

  17. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    PubMed Central

    Carvalho, Monica; Lozano, Miguel A.; Ramos, José; Serra, Luis M.

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs. PMID:24453881
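
    As a toy illustration of the kind of integer linear program described above (technology selection plus period-by-period operation), the sketch below uses the PuLP package with invented demands, prices, and a single candidate cogeneration unit; the hospital model itself is far larger and richer.

    ```python
    # Minimal synthesis/operation MILP: install a CHP unit or not, then meet heat and
    # electricity demand in two demand intervals at minimum cost (all numbers invented).
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus, value

    periods = [0, 1]
    heat_demand = {0: 40.0, 1: 90.0}   # kW
    elec_demand = {0: 60.0, 1: 80.0}   # kW

    prob = LpProblem("mini_trigeneration", LpMinimize)

    buy_chp = LpVariable("install_chp", cat=LpBinary)  # investment decision
    chp_out = {t: LpVariable(f"chp_elec_{t}", lowBound=0) for t in periods}
    boiler = {t: LpVariable(f"boiler_heat_{t}", lowBound=0) for t in periods}
    grid = {t: LpVariable(f"grid_elec_{t}", lowBound=0) for t in periods}

    # Objective: toy capital charge + fuel for CHP and boiler + purchased electricity.
    prob += 10.0 * buy_chp + lpSum(
        0.06 * chp_out[t] + 0.04 * boiler[t] + 0.15 * grid[t] for t in periods
    )

    for t in periods:
        prob += chp_out[t] <= 100.0 * buy_chp                    # CHP runs only if installed
        prob += chp_out[t] + grid[t] >= elec_demand[t]           # electricity balance
        prob += 1.2 * chp_out[t] + boiler[t] >= heat_demand[t]   # heat balance (heat/power 1.2)

    prob.solve()
    print(LpStatus[prob.status], value(buy_chp), {t: value(grid[t]) for t in periods})
    ```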

  18. Synthesis of trigeneration systems: sensitivity analyses and resilience.

    PubMed

    Carvalho, Monica; Lozano, Miguel A; Ramos, José; Serra, Luis M

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.

  19. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    NASA Astrophysics Data System (ADS)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

    Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
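
    The categorical indices named above come from a 2x2 contingency table of forecast versus observed rain occurrence; the short function below computes probability of detection (POD), false alarm ratio (FAR), frequency bias index (FBI) and critical success index (CSI) from hypothetical counts.

    ```python
    # Standard categorical verification scores from contingency-table counts.
    def categorical_scores(hits, false_alarms, misses):
        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        fbi = (hits + false_alarms) / (hits + misses)   # frequency bias index
        csi = hits / (hits + misses + false_alarms)     # critical success index
        return {"POD": pod, "FAR": far, "FBI": fbi, "CSI": csi}

    # Hypothetical counts for one scheme combination at one rain threshold.
    print(categorical_scores(hits=62, false_alarms=18, misses=11))
    ```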

  20. Combustor liner durability analysis

    NASA Technical Reports Server (NTRS)

    Moreno, V.

    1981-01-01

    An 18-month combustor liner durability analysis program was conducted to evaluate the use of advanced three-dimensional transient heat transfer and nonlinear stress-strain analyses for modeling the cyclic thermomechanical response of a simulated combustor liner specimen. Cyclic life prediction technology for creep/fatigue interaction is evaluated using a variety of state-of-the-art tools for crack initiation and propagation. The sensitivity of the initiation models to a change in the operating conditions is also assessed.

  1. Lumbar Facet Joint Compressive Injury Induces Lasting Changes in Local Structure, Nociceptive Scores, and Inflammatory Mediators in a Novel Rat Model

    PubMed Central

    Henry, James L.; Yashpal, Kiran; Vernon, Howard; Kim, Jaesung; Im, Hee-Jeong

    2012-01-01

    Objective. To develop a novel animal model of persisting lumbar facet joint pain. Methods. Sprague Dawley rats were anaesthetized and the right lumbar (L5/L6) facet joint was exposed and compressed to ~1 mm with modified clamps applied for three minutes; sham-operated and naïve animals were used as control groups. After five days, animals were tested for hind-paw sensitivity using von Frey filaments and axial deep tissue sensitivity by algometer on assigned days up to 28 days. Animals were sacrificed at selected times for histological and biochemical analysis. Results. Histological sections revealed site-specific loss of cartilage in model animals only. Tactile hypersensitivity was observed for the ipsi- and contralateral paws lasting 28 days. The threshold at which deep tissue pressure just elicited vocalization was obtained at three lumbar levels; sensitivity at L1 > L3/4 > L6. Biochemical analyses revealed increases in proinflammatory cytokines, especially TNF-α, IL-1α, and IL-1β. Conclusions. These data suggest that compression of a facet joint induces a novel model of local cartilage loss accompanied by increased sensitivity to mechanical stimuli and by increases in inflammatory mediators. This new model may be useful for studies on mechanisms and treatment of lumbar facet joint pain and osteoarthritis. PMID:22966427

  2. Adult attachment status predicts the developmental trajectory of maternal sensitivity in new motherhood among Chinese mothers.

    PubMed

    Liang, X; Wang, Z-Y; Liu, H-Y; Lin, Q; Wang, Z; Liu, Y

    2015-01-01

    To investigate adult attachment status in first-time mothers, and stability and/or changes in maternal sensitivity during infancy. A longitudinal study using quantitative and qualitative methods, and statistical modelling. Three home visits were undertaken when the infant was approximately six, nine and 14 months old. The Adult-to-Parental Attachment Experience Survey was used, and scores for three dimensions were obtained: secure-autonomous, preoccupied and dismissive. Maternal sensitivity was assessed at each time point using the Maternal Behaviour Q-Sort by observing interaction between the mother and infant at home. Homes and community settings in greater metropolitan Beijing, North China. A total of 83 mothers and their infants, born in 2010, were enrolled in this study. Data were missing for one or more time points in 20 cases. The mean score for maternal sensitivity tended to increase from six to 14 months. Post-hoc analyses of one-way repeated-measures analysis of variance revealed that maternal sensitivity was significantly higher at 14 months than at six or nine months. An unconditional latent growth model (LGM) of maternal sensitivity, estimated using the Bayesian approach, provided a good fit for the data. Using three attachment-related variables as predictors in the conditional LGM, the model fitting indices were found to be sufficient, and the results suggested that the secure score positively predicted the intercept of the growth model, and the dismissive score negatively predicted both the intercept and slope of the growth model. Maternal sensitivity increased over time during infancy. Furthermore, individual differences existed in the developmental trajectory, which was influenced by maternal attachment status. Knowledge about attachment-related differences in the trajectory of first-time mothers' sensitivity to infants may help midwives and doctors to provide individualised information and support, with special attention given to mothers with a dismissive attachment status. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. [Gender-sensitive epidemiological data analysis: methodological aspects and empirical outcomes. Illustrated by a health reporting example].

    PubMed

    Jahn, I; Foraita, R

    2008-01-01

    In Germany, gender-sensitive approaches are part of guidelines for good epidemiological practice as well as of health reporting. They are increasingly demanded as a way to realize the gender mainstreaming strategy in research funding by the federation and the federal states. This paper focuses on methodological aspects of data analysis, using the Bremen health report, a population-based cross-sectional study, as an empirical example. Health reporting requires analysis and reporting methods that, on the one hand, can uncover sex/gender aspects of the research questions and, on the other hand, consider how results can be communicated adequately. The core question is: what consequences does the way the category sex is included in different statistical analyses for the identification of potential target groups have for the results? Logistic regressions and a two-stage procedure were conducted as exploratory evaluation methods. The two-stage procedure combines graphical models with CHAID decision trees and allows complex results to be visualised. Both methods were applied stratified by sex/gender as well as adjusted for sex/gender, and the results were compared with each other. Only stratified analyses are able to detect differences between the sexes and within the sex/gender groups as long as one cannot resort to prior knowledge. Adjusted analyses can detect sex/gender differences only if interaction terms have been included in the model. Results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, whether a statistical method is gender-sensitive can only be answered for concrete research questions and known conditions. Often, an appropriate statistical procedure can be chosen after conducting separate analyses for women and men. Future gender studies require innovative study designs as well as conceptual clarity with regard to the biological and the sociocultural elements of the category sex/gender.
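
    A compact illustration of the stratified-versus-adjusted contrast described above, using the statsmodels formula interface; the file name and variable names (outcome, exposure, sex, age) are placeholders rather than variables from the Bremen report.

    ```python
    # Adjusted model without interaction, model with sex-by-exposure interaction, and
    # sex-stratified models, fitted to hypothetical data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("health_report.csv")  # hypothetical file: outcome, exposure, sex, age

    adjusted = smf.logit("outcome ~ exposure + age + C(sex)", data=df).fit()
    interaction = smf.logit("outcome ~ exposure * C(sex) + age", data=df).fit()
    stratified = {
        sex: smf.logit("outcome ~ exposure + age", data=grp).fit()
        for sex, grp in df.groupby("sex")
    }

    print(adjusted.params["exposure"])                  # one pooled effect, differences hidden
    print(interaction.params.filter(like="exposure"))   # main effect plus sex-specific deviation
    for sex, fit in stratified.items():
        print(sex, fit.params["exposure"])              # sex-specific effects
    ```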

  4. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. These injections, which are simulated gravitational waveforms, are then searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated by Monte Carlo (MC) integration from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, owing to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the target probabilities, determined by the population model and standard cosmology, to the injection probabilities, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
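
    A minimal numerical sketch of that reweighting, with an invented injection set, a toy detection criterion, and a power-law mass population; it only demonstrates the importance-sampling bookkeeping, not a real search pipeline.

    ```python
    # One set of injections drawn from p_inj is reused for several populations p_pop by
    # weighting each found injection with p_pop/p_inj.
    import numpy as np

    rng = np.random.default_rng(1)
    V_inj = 50.0       # comoving volume in which injections were placed (toy units)
    N = 100_000

    # "Generic" injections: masses drawn uniformly over [5, 80] solar masses.
    m1 = rng.uniform(5.0, 80.0, N)
    p_inj = np.full(N, 1.0 / 75.0)

    # Toy detectability: heavier systems are more likely to be recovered by the search.
    found = rng.uniform(0.0, 1.0, N) < np.clip(m1 / 100.0, 0.0, 1.0)

    def sensitive_volume(alpha):
        """Population-averaged sensitive volume for a power-law mass model p(m) ~ m^-alpha."""
        norm = (80.0 ** (1 - alpha) - 5.0 ** (1 - alpha)) / (1 - alpha)
        p_pop = m1 ** (-alpha) / norm
        weights = p_pop / p_inj
        return V_inj * np.sum(weights[found]) / N

    for alpha in (1.5, 2.3, 3.0):
        print(alpha, round(sensitive_volume(alpha), 2))
    ```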

  5. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization study is carried out using a geometric mean approach. Following this, sensitivity analyses with the aid of a variance-based non-parametric approach and partial correlation coefficients are conducted using data available from surrogate models of the objectives and the multi-objective optima to identify the contribution of the design variables to the objective variability and to analyze the variability of the design variables and the objectives. In summary, the present dissertation offers insight into an improved coarse-to-fine grid extrapolation technique for Navier-Stokes computations and also suggests tools for a designer to conduct design optimization study and related sensitivity analyses for a given design problem.
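
    To show what a response-surface surrogate looks like in practice, here is a tiny second-order polynomial fit to an invented three-variable design and a synthetic objective; the variables, sample size, and response are placeholders rather than the dissertation's injector data.

    ```python
    # Quadratic response-surface surrogate over three normalized design variables.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    X = rng.uniform(0.0, 1.0, size=(40, 3))   # e.g. normalized flow angle and two flow areas
    y = (2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
         + rng.normal(0.0, 0.05, 40))         # stand-in for a CFD objective

    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X, y)

    # The cheap surrogate can now be evaluated densely for optimization or sensitivity studies.
    grid = rng.uniform(0.0, 1.0, size=(5, 3))
    print(surrogate.predict(grid).round(3))
    ```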

  6. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Mesoscale research activities with the LAMPS model

    NASA Technical Reports Server (NTRS)

    Kalb, M. W.

    1985-01-01

    Researchers achieved full implementation of the LAMPS mesoscale model on the Atmospheric Sciences Division computer and derived balanced and real wind initial states for three case studies: March 6, April 24, April 26, 1982. Numerical simulations were performed for three separate studies: (1) a satellite moisture data impact study using Vertical Atmospheric Sounder (VAS) precipitable water as a constraint on model initial state moisture analyses; (2) an evaluation of mesoscale model precipitation simulation accuracy with and without convective parameterization; and (3) the sensitivity of model precipitation to mesoscale detail of moisture and vertical motion in an initial state.

  8. Hierarchical demographic approaches for assessing invasion dynamics of non-indigenous species: An example using northern snakehead (Channa argus)

    USGS Publications Warehouse

    Jiao, Y.; Lapointe, N.W.R.; Angermeier, P.L.; Murphy, B.R.

    2009-01-01

    Models of species’ demographic features are commonly used to understand population dynamics and inform management tactics. Hierarchical demographic models are ideal for the assessment of non-indigenous species because our knowledge of non-indigenous populations is usually limited, data on demographic traits often come from a species’ native range, these traits vary among populations, and traits are likely to vary considerably over time as species adapt to new environments. Hierarchical models readily incorporate this spatiotemporal variation in species’ demographic traits by representing demographic parameters as multi-level hierarchies. As is done for traditional non-hierarchical matrix models, sensitivity and elasticity analyses are used to evaluate the contributions of different life stages and parameters to estimates of population growth rate. We applied a hierarchical model to northern snakehead (Channa argus), a fish currently invading the eastern United States. We used a Monte Carlo approach to simulate uncertainties in the sensitivity and elasticity analyses and to project future population persistence under selected management tactics. We gathered key biological information on northern snakehead natural mortality, maturity and recruitment in its native Asian environment. We compared the model performance with and without hierarchy of parameters. Our results suggest that ignoring the hierarchy of parameters in demographic models may result in poor estimates of population size and growth and may lead to erroneous management advice. In our case, the hierarchy used multi-level distributions to simulate the heterogeneity of demographic parameters across different locations or situations. The probability that the northern snakehead population will increase and harm the native fauna is considerable. Our elasticity and prognostic analyses showed that intensive control efforts immediately prior to spawning and/or juvenile-dispersal periods would be more effective (and probably require less effort) than year-round control efforts. Our study demonstrates the importance of considering the hierarchy of parameters in estimating population growth rate and evaluating different management strategies for non-indigenous invasive species. © 2009 Elsevier B.V.

  9. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, conservative geochemical behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
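
    The numerical heart of such a mixing model can be written in a few lines: find non-negative source proportions, summing to one, that best reproduce a mixture's tracer signature. The sketch below does this with a soft sum-to-one constraint and non-negative least squares; the concentrations are invented, not the study's XRF data.

    ```python
    # Toy unmixing: rows are tracers, columns are sources (standardized concentrations).
    import numpy as np
    from scipy.optimize import nnls

    sources = np.array([[1.2, 0.4, 0.9],
                        [0.3, 1.1, 0.6],
                        [0.8, 0.2, 1.4],
                        [0.5, 0.9, 0.3]])
    mixture = np.array([0.85, 0.65, 0.80, 0.55])

    # Append a heavily weighted sum-to-one row, then solve non-negative least squares.
    w = 100.0
    A = np.vstack([sources, w * np.ones(sources.shape[1])])
    b = np.append(mixture, w)
    proportions, _ = nnls(A, b)

    print(proportions.round(3), "sum =", round(proportions.sum(), 3))
    residual = sources @ proportions - mixture
    print("RMSE of fit:", round(float(np.sqrt(np.mean(residual ** 2))), 4))
    ```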

  10. Single-particle strength from nucleon transfer in oxygen isotopes: Sensitivity to model parameters

    NASA Astrophysics Data System (ADS)

    Flavigny, F.; Keeley, N.; Gillibert, A.; Obertelli, A.

    2018-03-01

    In the analysis of transfer reaction data to extract nuclear structure information, the choice of input parameters to the reaction model, such as distorting potentials and overlap functions, has a significant impact. In this paper we consider a set of data for the (d,t) and (d,3He) reactions on 14,16,18O as a well-delimited subject for a study of the sensitivity of such analyses to different choices of distorting potentials and overlap functions, with particular reference to a previous investigation of the variation of valence nucleon correlations as a function of the difference in nucleon separation energy ΔS = |Sp - Sn| [Phys. Rev. Lett. 110, 122503 (2013), 10.1103/PhysRevLett.110.122503].

  11. Turbulent transport model of wind shear in thunderstorm gust fronts and warm fronts

    NASA Technical Reports Server (NTRS)

    Lewellen, W. S.; Teske, M. E.; Segur, H. C. O.

    1978-01-01

    A model of turbulent flow in the atmospheric boundary layer was used to simulate the low-level wind and turbulence profiles associated with both local thunderstorm gust fronts and synoptic-scale warm fronts. Dimensional analyses of both types of fronts provided the physical scaling necessary to permit normalized simulations to represent fronts for any temperature jump. The sensitivity of the thunderstorm gust front to five different dimensionless parameters, as well as to a change from axisymmetric to planar geometry, was examined. The sensitivity of the warm front to variations in the Rossby number was examined. Results of the simulations are discussed in terms of the conditions which lead to wind shears that are likely to be most hazardous for aircraft operations.

  12. Cost-effectiveness of vedolizumab compared with infliximab, adalimumab, and golimumab in patients with ulcerative colitis in the United Kingdom.

    PubMed

    Wilson, Michele R; Bergman, Annika; Chevrou-Severac, Helene; Selby, Ross; Smyth, Michael; Kerrigan, Matthew C

    2018-03-01

    To examine the clinical and economic impact of vedolizumab compared with infliximab, adalimumab, and golimumab in the treatment of moderately to severely active ulcerative colitis (UC) in the United Kingdom (UK). A decision analytic model in Microsoft Excel was used to compare vedolizumab with other biologic treatments (infliximab, adalimumab, and golimumab) for the treatment of biologic-naïve patients with UC in the UK. Efficacy data were obtained from a network meta-analysis using placebo as the common comparator. Other inputs (e.g., unit costs, adverse-event disutilities, probability of surgery, mortality) were obtained from published literature. Costs were presented in 2012/2013 British pounds. Outcomes included quality-adjusted life-years (QALYs). Costs and outcomes were discounted by 3.5% per year. Incremental cost-effectiveness ratios were presented for vedolizumab compared with other biologics. Univariate and multivariate probabilistic sensitivity analyses were conducted to assess model robustness to parameter uncertainty. The model predicted that anti-tumour necrosis factor-naïve patients on vedolizumab would accrue more QALYs than patients on other biologics. The incremental results suggest that vedolizumab is a cost-effective treatment compared with adalimumab (incremental cost-effectiveness ratio of £22,735/QALY) and dominant compared with infliximab and golimumab. Sensitivity analyses suggest that results are most sensitive to treatment response and transition probabilities. However, vedolizumab is cost-effective irrespective of variation in any of the input parameters. Our model predicted that treatment with vedolizumab improves QALYs, increases time in remission and response, and is a cost-effective treatment option compared with all other biologics for biologic-naïve patients with moderately to severely active UC.

  13. Use of drug-eluting stents versus bare-metal stents in Korea: a cost-minimization analysis using population data.

    PubMed

    Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo

    2013-07-01

    The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.

  14. Effects of Multi-Electrode Renal Denervation on Insulin Sensitivity and Glucose Metabolism in a Canine Model of Type 2 Diabetes Mellitus.

    PubMed

    Pan, Tao; Guo, Jin-He; Ling, Long; Qian, Yue; Dong, Yong-Hua; Yin, Hua-Qing; Zhu, Hai-Dong; Teng, Gao-Jun

    2018-05-01

    To evaluate the effects of multi-electrode catheter-based renal denervation (RDN) on insulin sensitivity and glucose metabolism in a type 2 diabetes mellitus (T2DM) canine model. Thirty-three dogs were divided equally into 3 groups: bilateral renal denervation (BRDN) group, left renal denervation (LRDN) group, and sham operation (SHAM) group. Body weight and blood biochemistry were measured at baseline, 20 weeks, and 32 weeks, and renal angiography and computerized tomographic (CT) angiography were performed before the procedure and 1 month, 2 months, and 3 months after the procedure. Western blot was used to identify the activities of gluconeogenic enzymes and insulin-signaling proteins. Fasting plasma glucose (9.64 ± 1.57 mmol/L vs 5.12 ± 1.08 mmol/L; P < .0001), fasting insulin (16.19 ± 1.43 mIU/mL vs 5.07 ± 1.13 mIU/mL; P < .0001), and homeostasis-model assessment of insulin resistance (HOMA-IR; 6.95 ± 1.33 vs 1.15 ± 0.33; P < .0001) in the BRDN group had significantly decreased at the 3-month follow-up compared with the SHAM group. Western blot analyses showed that RDN suppressed the gluconeogenetic genes, modulated insulin action, and activated the insulin receptor-AKT signaling cascade in the liver. CT angiography and histopathologic analyses did not show any dissection, aneurysm, thrombus, or rupture in any of the renal arteries. These findings identified that multi-electrode catheter-based RDN could effectively decrease gluconeogenesis and glycogenolysis, resulting in improvements in insulin sensitivity and glucose metabolism in a T2DM canine model. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
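
    The reported HOMA-IR values are consistent with the standard homeostasis-model formula (glucose in mmol/L times insulin in µU/mL, divided by 22.5); the snippet below simply checks that the paired glucose and insulin values quoted above reproduce the two HOMA-IR figures.

    ```python
    # Standard HOMA-IR formula; the abstract does not state which variant was used,
    # but this one reproduces the reported values.
    def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
        return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

    print(round(homa_ir(9.64, 16.19), 2))  # ~6.94, close to the reported 6.95
    print(round(homa_ir(5.12, 5.07), 2))   # ~1.15, matching the reported 1.15
    ```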

  15. Time-series analyses of air pollution and mortality in the United States: a subsampling approach.

    PubMed

    Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie

    2013-01-01

    Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from ≤ 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.

  16. Quantitative analysis of the thermal requirements for stepwise physical dormancy-break in seeds of the winter annual Geranium carolinianum (Geraniaceae)

    PubMed Central

    Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.

    2013-01-01

    Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17.2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2.0 and 3.5 and between 0.02 and 0.1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
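    The thermal time approach accumulates degree-days above the base temperature (17.2 °C here) and models the sensitive fraction of the seed lot as a Gompertz function of that accumulated thermal time. A hedged sketch with synthetic data; the parameter values and starting guesses are illustrative, not those estimated in the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T_BASE = 17.2  # base temperature for sensitivity induction (deg C), from the abstract

    def thermal_time(daily_mean_temps):
        """Accumulate degree-days above the base temperature."""
        temps = np.asarray(daily_mean_temps, dtype=float)
        return np.cumsum(np.clip(temps - T_BASE, 0.0, None))

    def gompertz(theta, a, b, k):
        """Three-parameter Gompertz curve: asymptote a, displacement b, rate k."""
        return a * np.exp(-b * np.exp(-k * theta))

    # Hypothetical dry-storage experiment: constant 32 deg C for 60 days, with the
    # sensitive fraction generated from an assumed Gompertz curve (synthetic data).
    theta = thermal_time(np.full(60, 32.0))
    observed = gompertz(theta, 0.9, 8.0, 0.006)

    params, _ = curve_fit(gompertz, theta, observed, p0=(1.0, 5.0, 0.01))
    print("fitted (a, b, k):", np.round(params, 4))
    ```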

  17. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  18. Use of social network analysis and global sensitivity and uncertainty analyses to better understand an influenza outbreak.

    PubMed

    Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa

    2017-06-27

    In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on the outbreak using social network analysis and global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects was different between traditional epidemiological methods and newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variation over space and time. We concluded that the use of newer approaches was significantly more efficient for managing and controlling infectious disease outbreaks, as well as saving time and public health resources, and could be widely applied to similar local outbreaks.
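    Degree and betweenness centrality of a case-contact network, the measures compared in this outbreak, can be computed directly with networkx; the edge list below is purely illustrative and not the study data.

    ```python
    import networkx as nx

    # Illustrative contact network (case IDs and reported contacts); not the study data.
    contacts = [("c1", "c2"), ("c1", "c3"), ("c2", "c4"),
                ("c3", "c4"), ("c4", "c5"), ("c5", "c6")]
    G = nx.Graph(contacts)

    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G, normalized=True)

    # Rank cases by betweenness to flag likely bridges between transmission clusters.
    for node in sorted(G, key=betweenness.get, reverse=True):
        print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
    ```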

  19. A general method for handling missing binary outcome data in randomized controlled trials

    PubMed Central

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-01-01

    Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Participants A total of 489 and 1758 participants from two smoking cessation trials. Measurements The abstinence outcomes were obtained using telephone interviews. Findings The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
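    One common way to embed 'missing = smoking' and related assumptions in a wider class of models is to introduce a sensitivity parameter governing the assumed abstinence probability among participants with missing outcomes. The sketch below is a generic illustration of that idea rather than the authors' exact model, and all counts are hypothetical.

    ```python
    import numpy as np

    def arm_abstinence(n_abstinent_observed, n_observed, n_missing, delta):
        """Abstinence proportion when missing participants are assumed to abstain
        with probability delta times the observed abstinence rate (delta=0
        reproduces the 'missing = smoking' assumption)."""
        p_obs = n_abstinent_observed / n_observed
        return (n_abstinent_observed + delta * p_obs * n_missing) / (n_observed + n_missing)

    # Hypothetical two-arm trial: (abstinent among observed, observed, missing).
    intervention = (120, 300, 100)
    control = (90, 320, 80)

    for delta in np.linspace(0.0, 1.0, 5):
        rd = arm_abstinence(*intervention, delta) - arm_abstinence(*control, delta)
        print(f"delta={delta:.2f}  risk difference={rd:+.3f}")
    ```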

  20. Maternal sensitivity, infant limbic structure volume and functional connectivity: a preliminary study

    PubMed Central

    Rifkin-Graboi, A; Kong, L; Sim, L W; Sanmugam, S; Broekman, B F P; Chen, H; Wong, E; Kwek, K; Saw, S-M; Chong, Y-S; Gluckman, P D; Fortier, M V; Pederson, D; Meaney, M J; Qiu, A

    2015-01-01

    Mechanisms underlying the profound parental effects on cognitive, emotional and social development in humans remain poorly understood. Studies with nonhuman models suggest variations in parental care affect the limbic system, influential to learning, autobiography and emotional regulation. In some research, nonoptimal care relates to decreases in neurogenesis, although other work suggests early-postnatal social adversity accelerates the maturation of limbic structures associated with emotional learning. We explored whether maternal sensitivity predicts human limbic system development and functional connectivity patterns in a small sample of human infants. When infants were 6 months of age, 20 mother–infant dyads attended a laboratory-based observational session and the infants underwent neuroimaging at the same age. After considering age at imaging, household income and postnatal maternal anxiety, regression analyses demonstrated significant indirect associations between maternal sensitivity and bilateral hippocampal volume at six months, with the majority of associations between sensitivity and the amygdala demonstrating similar indirect, but not significant results. Moreover, functional analyses revealed direct associations between maternal sensitivity and connectivity between the hippocampus and areas important for emotional regulation and socio-emotional functioning. Sensitivity additionally predicted indirect associations between limbic structures and regions related to autobiographical memory. Our volumetric results are consistent with research indicating accelerated limbic development in response to early social adversity, and in combination with our functional results, if replicated in a larger sample, may suggest that subtle, but important, variations in maternal care influence neuroanatomical trajectories important to future cognitive and emotional functioning. PMID:26506054

  1. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
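    Fitting competing right-skewed distributions to cost data and comparing the implied mean costs can be sketched with scipy; the data below are synthetic stand-ins for the four example data sets, and the AIC bookkeeping is approximate (the fixed location parameter is counted as free).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    costs = rng.gamma(shape=1.5, scale=2000.0, size=200)  # synthetic, right-skewed "costs"

    candidates = {
        "normal": stats.norm,
        "gamma": stats.gamma,
        "log-normal": stats.lognorm,
        "log-logistic": stats.fisk,   # scipy's name for the log-logistic
    }

    for name, dist in candidates.items():
        params = dist.fit(costs) if name == "normal" else dist.fit(costs, floc=0)
        loglik = np.sum(dist.logpdf(costs, *params))
        aic = 2 * len(params) - 2 * loglik
        mean_cost = dist.mean(*params)
        print(f"{name:12s} AIC={aic:9.1f}  implied mean cost={mean_cost:9.1f}")
    ```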

  2. Seeking heavy Higgs bosons through cascade decays

    NASA Astrophysics Data System (ADS)

    Coleppa, Baradhwaj; Fuks, Benjamin; Poulose, P.; Sahoo, Shibananda

    2018-04-01

    We investigate the LHC discovery prospects for a heavy Higgs boson decaying into the standard model Higgs boson and additional weak bosons. We consider a generic model-independent new physics configuration where this decay proceeds via a cascade involving other intermediate scalar bosons and focus on an LHC final-state signature comprised either of four b-jets and two charged leptons or of four charged leptons and two b-jets. We design two analyses of the corresponding signals, and demonstrate that a 5σ discovery at the 14 TeV LHC is possible for various combinations of the parent and daughter Higgs-boson masses. We moreover find that the standard model backgrounds can be sufficiently rejected to guarantee the reconstruction of the parent Higgs boson mass. We apply our analyses to the Type-II two-Higgs-doublet model and identify the regions of the parameter space to which the LHC is sensitive.

  3. ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.

    1999-03-01

    ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.

  4. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  5. Modelling generalisation and power dissipation of flexible-wheel suspension concept for planetary surface vehicles

    NASA Astrophysics Data System (ADS)

    Cao, Dongpu; Khajepour, Amir; Song, Xubin

    2011-08-01

    The flexible-wheel (FW) suspension concept is regarded as one of the novel technologies for future planetary surface vehicles (PSVs). This study develops generalised models for the fundamental stiffness and damping properties and power consumption characteristics of the FW suspension, with and without considering wheel-hub dimensions. A compliance rolling resistance (CRR) coefficient is also defined and derived for the FW suspension. Based on the generalised models and two dimensionless measures, suspension properties are analysed for two FW suspension configurations. A sensitivity analysis is performed to investigate the effects of the design parameters and operating conditions on the CRR and power consumption characteristics of the FW suspension. The modelling generalisation permits analyses of the fundamental properties and power consumption characteristics of different FW suspension designs in a uniform and very convenient manner, which would serve as a theoretical foundation for the design of FW suspensions for future PSVs.

  6. Economic Evaluation of Apixaban for the Prevention of Stroke in Non-Valvular Atrial Fibrillation in the Netherlands

    PubMed Central

    Stevanović, Jelena; Pompen, Marjolein; Le, Hoa H.; Rozenbaum, Mark H.; Tieleman, Robert G.; Postma, Maarten J.

    2014-01-01

    Background Stroke prevention is the main goal of treating patients with atrial fibrillation (AF). Vitamin-K antagonists (VKAs) present an effective treatment for stroke prevention; however, the risk of bleeding and the requirement for regular coagulation monitoring limit their use. Apixaban is a novel oral anticoagulant associated with significantly lower hazard rates for stroke, major bleedings and treatment discontinuations, compared to VKAs. Objective To estimate the cost-effectiveness of apixaban compared to VKAs in non-valvular AF patients in the Netherlands. Methods A previously published lifetime Markov model using efficacy data from the ARISTOTLE and the AVERROES trials was modified to reflect the use of oral anticoagulants in the Netherlands. Dutch-specific costs, baseline population stroke risk and coagulation monitoring levels were incorporated. Univariate, probabilistic sensitivity and scenario analyses on the impact of different coagulation monitoring levels were performed on the incremental cost-effectiveness ratio (ICER). Results Treatment with apixaban compared to VKAs resulted in an ICER of €10,576 per quality adjusted life year (QALY). These findings correspond with the lower numbers of strokes and bleedings associated with the use of apixaban compared to VKAs. Univariate sensitivity analyses revealed model sensitivity to the absolute stroke risk with apixaban and the treatment discontinuation risks with apixaban and VKAs. The probability that apixaban is cost-effective at a willingness-to-pay threshold of €20,000/QALY was 68%. Results of the scenario analyses on the impact of different coagulation monitoring levels were quite robust. Conclusions In patients with non-valvular AF, apixaban is likely to be a cost-effective alternative to VKAs in the Netherlands. PMID:25093723
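    The ICER and the probability of cost-effectiveness at a willingness-to-pay threshold are summaries of the probabilistic sensitivity analysis draws of incremental costs and QALYs. A minimal sketch with simulated placeholder draws (the distributions are not the apixaban model outputs):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Placeholder PSA draws: incremental costs (EUR) and incremental QALYs of the
    # new treatment vs the comparator; distributions are illustrative only.
    d_cost = rng.normal(loc=1500.0, scale=900.0, size=10_000)
    d_qaly = rng.normal(loc=0.14, scale=0.08, size=10_000)

    icer = d_cost.mean() / d_qaly.mean()
    print(f"ICER = {icer:,.0f} EUR per QALY gained")

    # Cost-effectiveness acceptability: share of draws with positive net monetary benefit.
    for wtp in (10_000, 20_000, 30_000):
        nmb = wtp * d_qaly - d_cost
        print(f"P(cost-effective at WTP {wtp:,}) = {(nmb > 0).mean():.2f}")
    ```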

  7. Cost-effectiveness of pharmacist-participated warfarin therapy management in Thailand.

    PubMed

    Saokaew, Surasak; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn; Nathisuwan, Surakit; Sukonthasarn, Apichard; Jeanpeerapong, Napawan

    2013-10-01

    Although pharmacist-participated warfarin therapy management (PWTM) is well established, economic evaluation of PWTM is still lacking, particularly in the Asia-Pacific region. The objective of this study was to estimate the cost-effectiveness of PWTM in Thailand using local data where available. A Markov model was used to compare lifetime costs and quality-adjusted life years (QALYs) accrued to patients receiving warfarin therapy through PWTM or usual care (UC). The model was populated with relevant information from both health care system and societal perspectives. Input data were obtained from the literature and database analyses. Incremental cost-effectiveness ratios (ICERs) were presented as year 2012 values. A base-case analysis was performed for patients aged 45 years. Sensitivity analyses, including one-way and probabilistic sensitivity analyses, were conducted to determine the robustness of the findings. From the societal perspective, PWTM and UC resulted in 39.5 and 38.7 QALYs, respectively. Thus, PWTM increased QALYs by 0.79 and increased costs by 92,491 THB (3,083 USD) compared with UC (ICER 116,468 THB [3,882.3 USD] per QALY gained). From the health care system perspective, PWTM also resulted in 0.79 additional QALYs and increased costs by 92,788 THB (3,093 USD) compared with UC (ICER 116,842 THB [3,894.7 USD] per QALY gained). Thus, PWTM was cost-effective compared with usual care, assuming a willingness-to-pay (WTP) threshold of 150,000 THB/QALY. Results were sensitive to the discount rate and the cost of clinic set-up. Our findings suggest that PWTM is a cost-effective intervention. Policy-makers may consider these findings when deciding whether to implement this strategy in the healthcare benefit package. Further updates are needed as additional data become available. © 2013.

  8. Medialization thyroplasty versus injection laryngoplasty: a cost minimization analysis.

    PubMed

    Tam, Samantha; Sun, Hongmei; Sarma, Sisira; Siu, Jennifer; Fung, Kevin; Sowerby, Leigh

    2017-02-20

    Medialization thyroplasty and injection laryngoplasty are widely accepted treatment options for unilateral vocal fold paralysis. Although both procedures result in similar clinical outcomes, little is known about the corresponding medical care costs. Medialization thyroplasty requires expensive operating room resources, while injection laryngoplasty utilizes outpatient resources but may require repeated procedures. The purpose of this study, therefore, is to quantify the cost differences in adult patients with unilateral vocal fold paralysis undergoing medialization thyroplasty versus injection laryngoplasty. A cost minimization analysis was conducted using a decision tree model. The decision tree model was constructed to capture clinical scenarios for medialization thyroplasty and injection laryngoplasty. Probabilities for various events were obtained from a retrospective cohort from the London Health Sciences Centre, Canada. Costs were derived from the published literature and the London Health Sciences Centre. All costs were reported in 2014 Canadian dollars. The time horizon was 5 years. The study was conducted from an academic hospital perspective in Canada. Various sensitivity analyses were conducted to assess differences in procedure-specific costs and probabilities of key events. Sixty-three patients underwent medialization thyroplasty and 41 underwent injection laryngoplasty. The cost of medialization thyroplasty was C$2499.10 per patient, whereas injection laryngoplasty cost C$943.19 per patient. Results showed that the cost savings with injection laryngoplasty were C$1555.91 per patient. Deterministic and probabilistic sensitivity analyses suggested that cost savings ranged from C$596 to C$3626. Treatment with injection laryngoplasty results in cost savings of C$1555.91 per patient. Our extensive sensitivity analyses suggest that switching from medialization thyroplasty to injection laryngoplasty will lead to a minimum cost saving of C$596 per patient. Considering the significant cost savings and similar effectiveness, injection laryngoplasty should be strongly considered as a preferred treatment option for patients diagnosed with unilateral vocal fold paralysis.

  9. Cost Effectiveness of Influenza Vaccine Choices in Children Aged 2–8 Years in the U.S.

    PubMed Central

    Smith, Kenneth J.; Raviotta, Jonathan M.; DePasse, Jay V.; Brown, Shawn T.; Shim, Eunha; Nowalk, Mary Patricia; Zimmerman, Richard K.

    2015-01-01

    Introduction Prior evidence found live attenuated influenza vaccine (LAIV) more effective than inactivated influenza vaccine (IIV) in children aged 2–8 years, leading CDC in 2014 to prefer LAIV use in this group. However, since 2013, LAIV has not proven superior, leading CDC in 2015 to rescind their LAIV preference statement. Here, the cost effectiveness of preferred LAIV use compared with IIV in children aged 2–8 years is estimated. Methods A Markov model estimated vaccination strategy cost effectiveness in terms of cost per quality-adjusted life year gained. Base case assumptions were: equal vaccine uptake, IIV use when LAIV was not indicated (in 11.7% of the cohort), and no indirect vaccination effects. Sensitivity analyses included estimates of indirect effects from both equation- and agent-based models. Analyses were performed in 2014–2015. Results Using prior effectiveness data in children aged 2–8 years (LAIV=83%, IIV=64%), preferred LAIV use was less costly and more effective than IIV (dominant), with results sensitive only to LAIV and IIV effectiveness variation. Using 2014–2015 U.S. effectiveness data (LAIV=0%, IIV=15%), IIV was dominant. In two-way sensitivity analyses, LAIV use was cost saving over the entire range of IIV effectiveness (0%–81%) when absolute LAIV effectiveness was >7.1% higher than IIV, but never cost saving when absolute LAIV effectiveness was <3.5% higher than IIV. Conclusions Results support CDC’s decision to no longer prefer LAIV use and provide guidance on effectiveness differences between influenza vaccines that might lead to preferential LAIV recommendation for children aged 2–8 years. PMID:26868283
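    A two-way sensitivity analysis of the kind reported here sweeps both effectiveness parameters over a grid and records which strategy is cost saving at each point. The toy per-child cost model and prices below are placeholders, not the published Markov model:

    ```python
    import numpy as np

    def net_cost(vaccine_eff, dose_cost, attack_rate=0.10, cost_per_case=500.0):
        """Toy expected cost per child: dose cost plus cost of residual influenza cases."""
        return dose_cost + (1.0 - vaccine_eff) * attack_rate * cost_per_case

    LAIV_DOSE, IIV_DOSE = 27.0, 23.0   # hypothetical prices per dose

    # Grid over absolute vaccine effectiveness for each product.
    for laiv_eff in np.linspace(0.0, 0.9, 10):
        row = ["LAIV" if net_cost(laiv_eff, LAIV_DOSE) < net_cost(iiv_eff, IIV_DOSE) else "IIV "
               for iiv_eff in np.linspace(0.0, 0.81, 10)]
        print(f"LAIV eff {laiv_eff:4.2f}: " + " ".join(row))
    ```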

  10. MAG-EPA resolves lung inflammation in an allergic model of asthma.

    PubMed

    Morin, C; Fortin, S; Cantin, A M; Rousseau, É

    2013-09-01

    Asthma is a chronic disease characterized by airway hyperresponsiveness, inflammation and airway remodelling involving reversible bronchial obstruction. Omega-3 fatty acids and their derivatives are known to reduce inflammation in several tissues including lung. The effects of eicosapentaenoic acid monoacylglyceride (MAG-EPA), a newly synthesized EPA derivative, on the resolution of lung inflammation and airway hyperresponsiveness were determined in an in vivo model of allergic asthma. Ovalbumin (OVA)-sensitized guinea-pigs were either treated with MAG-EPA administered per os or left untreated. Isometric tension measurements, histological analyses, homogenate preparation for Western blot experiments or total RNA extraction for RT-PCR were performed to assess the effect of MAG-EPA treatments. Mechanical tension measurements revealed that oral MAG-EPA treatments reduced methacholine (MCh)-induced bronchial hyperresponsiveness in OVA-sensitized guinea-pigs. Moreover, MAG-EPA treatments also decreased Ca(2+) hypersensitivity of bronchial smooth muscle. Histological analyses and leucocyte counts in bronchoalveolar lavages revealed that oral MAG-EPA treatments led to less inflammatory cell recruitment in the lungs of OVA-sensitized guinea-pigs when compared with lungs from control animals. Results also revealed a reduction in mucin production and MUC5AC expression level in OVA-sensitized animals treated with MAG-EPA. Following MAG-EPA treatments, the transcript levels of pro-inflammatory markers such as IL-5, eotaxin, IL-13 and IL-4 were markedly reduced. Moreover, per os MAG-EPA administration reduced COX-2 over-expression in OVA-sensitized animals. We demonstrate that MAG-EPA reduces airway hyperresponsiveness and lung inflammation in OVA-sensitized animals, a finding consistent with a decrease in IL-4, IL-5, IL-13, COX-2 and MUC5AC expression levels in the lung. The present data suggest that MAG-EPA represents a new potential therapeutic strategy for resolving inflammation in allergic asthma. © 2013 John Wiley & Sons Ltd.

  11. Cost-effectiveness analysis of cochlear dose reduction by proton beam therapy for medulloblastoma in childhood.

    PubMed

    Hirano, Emi; Fuji, Hiroshi; Onoe, Tsuyoshi; Kumar, Vinay; Shirato, Hiroki; Kawabuchi, Koichi

    2014-03-01

    The aim of this study is to evaluate the cost-effectiveness of proton beam therapy with cochlear dose reduction compared with conventional X-ray radiotherapy for medulloblastoma in childhood. We developed a Markov model to describe the health states of 6-year-old children with medulloblastoma after treatment with proton or X-ray radiotherapy. The risks of hearing loss were calculated based on the cochlear dose for each treatment. Three types of health-related quality of life (HRQOL) measures, EQ-5D, HUI3 and SF-6D, were used for estimation of quality-adjusted life years (QALYs). The incremental cost-effectiveness ratio (ICER) for proton beam therapy compared with X-ray radiotherapy was calculated for each HRQOL measure. Sensitivity analyses were performed to model uncertainty in these parameters. The ICERs for EQ-5D, HUI3 and SF-6D were $21 716/QALY, $11 773/QALY, and $20 150/QALY, respectively. One-way sensitivity analyses found that the results were sensitive to the discount rate, the risk of hearing loss after proton therapy, and the cost of proton irradiation. Cost-effectiveness acceptability curve analysis revealed a 99% probability of proton therapy being cost-effective at a societal willingness-to-pay value. Proton beam therapy with cochlear dose reduction improves health outcomes at a cost that is within the acceptable cost-effectiveness range from the payer's standpoint.

  12. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    NASA Astrophysics Data System (ADS)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    The mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges resulted in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.

  13. Addressing global uncertainty and sensitivity in first-principles based microkinetic models by an adaptive sparse grid approach

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian

    2018-01-01

    In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.

  14. ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour.

    PubMed

    Evans, Stephanie; Alden, Kieran; Cucurull-Sanchez, Lourdes; Larminie, Christopher; Coles, Mark C; Kullberg, Marika C; Timmis, Jon

    2017-02-01

    A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model's sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.

  15. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  16. Larval connectivity of pearl oyster through biophysical modelling; evidence of food limitation and broodstock effect

    NASA Astrophysics Data System (ADS)

    Thomas, Yoann; Dumas, Franck; Andréfouët, Serge

    2016-12-01

    The black-lip pearl oyster (Pinctada margaritifera) is cultured extensively to produce black pearls, especially in French Polynesia atoll lagoons. This aquaculture relies on spat collection, a process that experiences spatial and temporal variability and needs to be optimized by understanding which factors influence recruitment. Here, we investigate the sensitivity of P. margaritifera larval dispersal to both physical and biological factors in the lagoon of Ahe atoll. By coupling a validated 3D larval dispersal model, a bioenergetic larval growth model following the Dynamic Energy Budget (DEB) theory, and a population dynamics model, the variability of lagoon-scale connectivity patterns and recruitment potential is investigated. The relative contribution of reared and wild broodstock to the lagoon-scale recruitment potential is also investigated. Sensitivity analyses pointed out the major effect of broodstock population structure, as well as the sensitivity of larval supply and subsequent settlement potential to the larval mortality rate and inter-individual growth variability. The application of the growth model clarifies how trophic conditions determine the larval supply and connectivity patterns. These results provide new cues for understanding the dynamics of bottom-dwelling populations in atoll lagoons and their recruitment, and suggest how to take advantage of these findings and numerical models for pearl oyster management.

  17. Theoretical insight into the sensitive mechanism of multilayer-shaped cocrystal explosives: compression and slide.

    PubMed

    Gao, Hong-fei; Zhang, Shu-hai; Ren, Fu-de; Gou, Rui-jun; Han, Gang; Wu, Jing-bo; Ding, Xiong; Zhao, Wen-hu

    2016-05-01

    Multilayer-shaped compression and slide models were employed to investigate the complex sensitive mechanisms of cocrystal explosives in response to external mechanical stimuli. Here, density functional theory (DFT) calculations implementing the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) with the Tkatchenko-Scheffler (TS) dispersion correction were applied to a series of cocrystal explosives: diacetone diperoxide (DADP)/1,3,5-trichloro-2,4,6-trinitrobenzene (TCTNB), DADP/1,3,5-tribromo-2,4,6-trinitrobenzene (TBTNB) and DADP/1,3,5-triiodo-2,4,6-trinitrobenzene (TITNB). The results show that the GGA-PBE-TS method is suitable for calculating these cocrystal systems. Compression and slide models illustrate well the sensitive mechanism of layer-shaped cocrystals of DADP/TCTNB and DADP/TITNB, in accordance with the results from analyses of electrostatic potentials and free space per molecule in the cocrystal lattice. DADP/TCTNB and DADP/TBTNB prefer sliding along a diagonal direction on the a-c face and generating strong intermolecular repulsions, compared to DADP/TITNB, which slides parallel to the b-c face. The impact sensitivity of DADP/TBTNB is predicted to be the same as that of DADP/TCTNB; DADP/TBTNB may be slightly less sensitive than DADP and much more sensitive than TBTNB.

  18. Cost-utility of transcatheter aortic valve implantation for inoperable patients with severe aortic stenosis treated by medical management: a UK cost-utility analysis based on patient-level data from the ADVANCE study

    PubMed Central

    Brecker, Stephen; Mealing, Stuart; Padhiar, Amie; Eaton, James; Sculpher, Mark; Busca, Rachele; Bosmans, Johan; Gerckens, Ulrich J; Wenaweser, Peter; Tamburino, Corrado; Bleiziffer, Sabine; Piazza, Nicolo; Moat, Neil; Linke, Axel

    2014-01-01

    Objective To use patient-level data from the ADVANCE study to evaluate the cost-effectiveness of transcatheter aortic valve implantation (TAVI) compared to medical management (MM) in patients with severe aortic stenosis from the perspective of the UK NHS. Methods A published decision-analytic model was adapted to include information on TAVI from the ADVANCE study. Patient-level data informed the choice as well as the form of mathematical functions that were used to model all-cause mortality, health-related quality of life and hospitalisations. TAVI-related resource use protocols were based on the ADVANCE study. MM was modelled on publicly available information from the PARTNER-B study. The outcome measures were incremental cost-effectiveness ratios (ICERs) estimated at a range of time horizons with benefits expressed as quality-adjusted life-years (QALY). Extensive sensitivity/subgroup analyses were undertaken to explore the impact of uncertainty in key clinical areas. Results Using a 5-year time horizon, the ICER for the comparison of all ADVANCE to all PARTNER-B patients was £13 943 per QALY gained. For the subset of ADVANCE patients classified as high risk (Logistic EuroSCORE >20%), the ICER was £17 718 per QALY gained. The ICER was below £30 000 per QALY gained in all sensitivity analyses relating to choice of MM data source and alternative modelling approaches for key parameters. When the time horizon was extended to 10 years, all ICERs generated in all analyses were below £20 000 per QALY gained. Conclusion TAVI is highly likely to be a cost-effective treatment for patients with severe aortic stenosis. PMID:25349700

  19. Cost-utility of transcatheter aortic valve implantation for inoperable patients with severe aortic stenosis treated by medical management: a UK cost-utility analysis based on patient-level data from the ADVANCE study.

    PubMed

    Brecker, Stephen; Mealing, Stuart; Padhiar, Amie; Eaton, James; Sculpher, Mark; Busca, Rachele; Bosmans, Johan; Gerckens, Ulrich J; Wenaweser, Peter; Tamburino, Corrado; Bleiziffer, Sabine; Piazza, Nicolo; Moat, Neil; Linke, Axel

    2014-01-01

    To use patient-level data from the ADVANCE study to evaluate the cost-effectiveness of transcatheter aortic valve implantation (TAVI) compared to medical management (MM) in patients with severe aortic stenosis from the perspective of the UK NHS. A published decision-analytic model was adapted to include information on TAVI from the ADVANCE study. Patient-level data informed the choice as well as the form of mathematical functions that were used to model all-cause mortality, health-related quality of life and hospitalisations. TAVI-related resource use protocols were based on the ADVANCE study. MM was modelled on publicly available information from the PARTNER-B study. The outcome measures were incremental cost-effectiveness ratios (ICERs) estimated at a range of time horizons with benefits expressed as quality-adjusted life-years (QALY). Extensive sensitivity/subgroup analyses were undertaken to explore the impact of uncertainty in key clinical areas. Using a 5-year time horizon, the ICER for the comparison of all ADVANCE to all PARTNER-B patients was £13 943 per QALY gained. For the subset of ADVANCE patients classified as high risk (Logistic EuroSCORE >20%), the ICER was £17 718 per QALY gained. The ICER was below £30 000 per QALY gained in all sensitivity analyses relating to choice of MM data source and alternative modelling approaches for key parameters. When the time horizon was extended to 10 years, all ICERs generated in all analyses were below £20 000 per QALY gained. TAVI is highly likely to be a cost-effective treatment for patients with severe aortic stenosis.

  20. Assessing the Optimal Position for Vedolizumab in the Treatment of Ulcerative Colitis: A Simulation Model.

    PubMed

    Scott, Frank I; Shah, Yash; Lasch, Karen; Luo, Michelle; Lewis, James D

    2018-01-18

    Vedolizumab, an α4β7 integrin monoclonal antibody inhibiting gut lymphocyte trafficking, is an effective treatment for ulcerative colitis (UC). We evaluated the optimal position of vedolizumab in the UC treatment paradigm. Using Markov modeling, we assessed multiple algorithms for the treatment of UC. The base case was a 35-year-old male with steroid-dependent moderately to severely active UC without previous immunomodulator or biologic use. The model included 4 different algorithms over 1 year, with vedolizumab use prior to: initiating azathioprine (Algorithm 1), combination therapy with infliximab and azathioprine (Algorithm 2), combination therapy with an alternative anti-tumor necrosis factor (anti-TNF) and azathioprine (Algorithm 3), and colectomy (Algorithm 4). Transition probabilities and quality-adjusted life-year (QALY) estimates were derived from the published literature. Primary analyses included simulating 100 trials of 100,000 individuals and assessing clinical outcomes and QALYs. Sensitivity analyses employed longer time horizons and ranges for all variables. Algorithm 1 (vedolizumab use prior to all other therapies) was the preferred strategy, resulting in 8981 additional individuals in remission, 18 fewer cases of lymphoma, and 1087 fewer serious infections per 100,000 patients compared with last-line use (Algorithm 4). Algorithm 1 also resulted in 0.0197 to 0.0205 more QALYs compared with the other algorithms. This benefit increased with longer time horizons. Algorithm 1 was preferred in all sensitivity analyses. The model suggests that treatment algorithms positioning vedolizumab prior to other therapies should be considered for individuals with moderately to severely active steroid-dependent UC. Further prospective research is needed to confirm these simulated results. © 2018 Crohn's & Colitis Foundation of America. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  1. Population viability analysis of Lower Missouri River shovelnose sturgeon with initial application to the pallid sturgeon

    USGS Publications Warehouse

    Bajer, P.G.; Wildhaber, M.L.

    2007-01-01

    Demographic models for the shovelnose (Scaphirhynchus platorynchus) and pallid (S. albus) sturgeons in the Lower Missouri River were developed to conduct sensitivity analyses for both populations. Potential effects of increased fishing mortality on the shovelnose sturgeon were also evaluated. Populations of shovelnose and pallid sturgeon were most sensitive to age-0 mortality rates as well as the mortality rates of juveniles and young adults. Overall, fecundity was a less sensitive parameter. However, increased fecundity effectively balanced higher mortality among sensitive age classes in both populations. Management that increases population-level fecundity and improves survival of age-0 fish, juveniles, and young adults should most effectively benefit both populations. Evaluation of reproductive values indicated that populations of pallid sturgeon dominated by ages ≥35 could rapidly lose their potential for growth, particularly if recruitment remains low. Under the initial parameter values portraying current conditions, the population of shovelnose sturgeon was predicted to decline by 1.65% annually, causing the commercial yield to decline as well. Modeling indicated that the commercial yield could increase substantially if exploitation of females in ages ≤12 was highly restricted.
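    Age-structured sensitivity analyses of this kind typically start from a Leslie projection matrix: the dominant eigenvalue gives the asymptotic growth rate, and the sensitivities of that rate to each vital rate follow from the left and right eigenvectors. The small matrix below is illustrative only and is not the sturgeon parameterization:

    ```python
    import numpy as np

    # Illustrative 4-age-class Leslie matrix: fecundities in the top row,
    # survival probabilities on the sub-diagonal (hypothetical values).
    A = np.array([
        [0.0, 0.0, 2.0, 4.0],
        [0.3, 0.0, 0.0, 0.0],
        [0.0, 0.6, 0.0, 0.0],
        [0.0, 0.0, 0.8, 0.0],
    ])

    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    lam = vals.real[i]
    w = np.abs(vecs[:, i].real)              # stable age distribution (right eigenvector)

    vals_l, vecs_l = np.linalg.eig(A.T)
    j = np.argmax(vals_l.real)
    v = np.abs(vecs_l[:, j].real)            # reproductive values (left eigenvector)

    S = np.outer(v, w) / (v @ w)             # sensitivity of lambda to each matrix entry
    print("asymptotic growth rate lambda =", round(lam, 3))
    print("sensitivity matrix:\n", np.round(S, 3))
    ```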

  2. On the aquitard-aquifer interface flow and the drawdown sensitivity with a partially penetrating pumping well in an anisotropic leaky confined aquifer

    NASA Astrophysics Data System (ADS)

    Feng, Qinggao; Zhan, Hongbin

    2015-02-01

    A mathematical model is developed for describing groundwater flow to a partially penetrating pumping well of finite diameter in an anisotropic leaky confined aquifer. The model accounts for the joint effects of aquitard storage, aquifer anisotropy, and wellbore storage by treating aquitard leakage as a boundary condition at the aquitard-aquifer interface rather than as a volumetric source/sink term in the governing equation, an approach not taken before. A new semi-analytical solution for the model is obtained by the Laplace transform in conjunction with separation of variables. Specific attention is paid to the flow across the aquitard-aquifer interface, which is of concern if the aquitard and aquifer have different pore-water chemistry. Moreover, Laplace-domain and steady-state solutions are obtained to calculate the rate and volume of (total) leakage through the aquitard-aquifer interface due to pumping in a partially penetrating well, which is also useful for engineers managing water resources. The sensitivity analyses for the drawdown illustrate that the drawdown is most sensitive to the well partial penetration. It is apparently sensitive to the aquifer anisotropy ratio over the entire time of pumping. It is moderately sensitive to the aquitard/aquifer specific storage ratio at intermediate times only. It is moderately sensitive to the aquitard/aquifer vertical hydraulic conductivity ratio and the aquitard/aquifer thickness ratio, with identical influence at late times.

  3. Economic outcomes of maintenance gefitinib for locally advanced/metastatic non-small-cell lung cancer with unknown EGFR mutations: a semi-Markov model analysis.

    PubMed

    Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan

    2014-01-01

    Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) after four chemotherapy cycles (21 days per cycle) of first-line platinum-based combination chemotherapy without disease progression. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four cycles of standard first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutations, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted to the PFS and overall survival curves independently. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over 1-, 3-, 6- and 10-year time horizons, at incremental costs of $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most influential variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, the utility of progressed disease, the price of gefitinib, the cost of follow-up treatment in the progressed survival state, and the utility of PFS on oral therapy. The price of gefitinib is the parameter that could most reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost-effective was zero under a willingness-to-pay (WTP) threshold of $16,349 (3 × per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust. Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutations is not cost-effective. Decreasing the price of gefitinib may be a preferential choice for meeting wide treatment demands in China.
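    Fitting two-parameter Weibull and log-logistic distributions to survival times can be sketched with scipy (censoring, which the actual analysis must handle, is ignored here); the PFS times below are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic progression-free survival times in months (no censoring in this sketch).
    pfs_months = stats.weibull_min.rvs(c=1.4, scale=6.0, size=300, random_state=42)

    # Two-parameter fits (location fixed at zero).
    wb_shape, _, wb_scale = stats.weibull_min.fit(pfs_months, floc=0)
    ll_shape, _, ll_scale = stats.fisk.fit(pfs_months, floc=0)   # fisk = log-logistic

    t = np.linspace(0, 24, 5)
    print("months:            ", t)
    print("Weibull S(t):      ", np.round(stats.weibull_min.sf(t, wb_shape, scale=wb_scale), 3))
    print("log-logistic S(t): ", np.round(stats.fisk.sf(t, ll_shape, scale=ll_scale), 3))
    ```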

  4. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. The total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
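    The kind of Monte Carlo importance screening discussed here can be illustrated by propagating input distributions through a toy model and ranking inputs by their rank (Spearman) correlation with the output; the model structure and distributions below are placeholders, not the multimedia fate model itself.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(11)
    n = 20_000

    # Placeholder input distributions for a toy multimedia-style model.
    inputs = {
        "emission_rate": rng.lognormal(mean=0.0, sigma=0.5, size=n),
        "partition_coeff": rng.lognormal(mean=1.0, sigma=0.8, size=n),
        "degradation_rate": rng.uniform(0.01, 0.2, size=n),
        "intake_rate": rng.normal(1.0, 0.05, size=n),
    }

    # Toy output standing in for an exposure concentration (illustrative form only).
    out = (inputs["emission_rate"] * inputs["partition_coeff"]
           / inputs["degradation_rate"]) * inputs["intake_rate"]

    # Rank inputs by the strength of their monotone association with the output.
    for name, values in inputs.items():
        rho, _ = spearmanr(values, out)
        print(f"{name:18s} Spearman rho = {rho:+.2f}")
    ```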

  5. Health care use and costs of adverse drug events emerging from outpatient treatment in Germany: a modelling approach.

    PubMed

    Stark, Renee G; John, Jürgen; Leidl, Reiner

    2011-01-13

    This study's aim was to develop a first quantification of the frequency and costs of adverse drug events (ADEs) originating in ambulatory medical practice in Germany. The frequencies and costs of ADEs were quantified for a base case, building on an existing cost-of-illness model for ADEs. The model originates from the U.S. health care system; its structure of treatment probabilities linked to ADEs was transferred to Germany. Sensitivity analyses based on values determined from a literature review were used to test the postulated results. For Germany, the base case postulated that about 2 million adults ingesting medications will have an ADE in 2007. Health care costs related to ADEs in this base case totalled 816 million Euros; mean costs per case were 381 Euros. About 58% of costs resulted from hospitalisations, 11% from emergency department visits and 21% from long-term care. Base-case estimates of the frequency and costs of ADEs were lower than all estimates of the sensitivity analyses. The postulated frequency and costs of ADEs illustrate the possible size of the health problems and economic burden related to ADEs in Germany. The validity of the U.S. treatment structure used remains to be determined for Germany. The sensitivity analysis used assumptions from different studies and thus further quantified the information gap in Germany regarding ADEs. This study found the costs of ADEs in the ambulatory setting in Germany to be significant. Due to data scarcity, results are only a rough indication.

  6. Cost-Effectiveness of Dapagliflozin versus Acarbose as a Monotherapy in Type 2 Diabetes in China.

    PubMed

    Gu, Shuyan; Mu, Yiming; Zhai, Suodi; Zeng, Yuhang; Zhen, Xuemei; Dong, Hengjin

    2016-01-01

    To estimate the long-term cost-effectiveness of dapagliflozin versus acarbose as monotherapy in treatment-naïve patients with type 2 diabetes mellitus (T2DM) in China. The Cardiff Diabetes Model, an economic model designed to evaluate the cost-effectiveness of comparator therapies in diabetes, was used to simulate disease progression and estimate the long-term effect of treatments on patients. Systematic literature reviews, hospital surveys, meta-analysis and indirect treatment comparison were conducted to obtain the model-required patient profiles, clinical data and costs. Health insurance costs (2015¥) were estimated over 40 years from a healthcare payer perspective. Univariate and probabilistic sensitivity analyses were performed. The model predicted that dapagliflozin had lower incidences of cardiovascular events, hypoglycemia and mortality events, was associated with a mean incremental benefit of 0.25 quality-adjusted life-years (QALYs) and with a lower cost of ¥8,439 compared with acarbose. This resulted in a cost saving of ¥33,786 per QALY gained with dapagliflozin. Sensitivity analyses determined that the results are robust. Dapagliflozin is dominant compared with acarbose as monotherapy for Chinese T2DM patients, with a small QALY gain and lower costs. Dapagliflozin offers a well-tolerated and cost-effective alternative medication for treatment-naïve patients in China, and may have a direct impact in reducing the disease burden of T2DM.

  7. Estimates of cost-effectiveness of prehospital continuous positive airway pressure in the management of acute pulmonary edema.

    PubMed

    Hubble, Michael W; Richards, Michael E; Wilfong, Denise A

    2008-01-01

    To estimate the cost-effectiveness of continuous positive airway pressure (CPAP) in managing prehospital acute pulmonary edema in an urban EMS system. Using estimates from published reports on prehospital and emergency department CPAP, a cost-effectiveness model of implementing CPAP in a typical urban EMS system was derived from the societal perspective as well as the perspective of the implementing EMS system. To assess the robustness of the model, a series of univariate and multivariate sensitivity analyses was performed on the input variables. Consumables, equipment, and training yielded a total cost of $89 per CPAP application. The theoretical system would be expected to use CPAP 4 times per 1000 EMS patients and to save 0.75 additional lives per 1000 EMS patients at a cost of $490 per life saved. CPAP is also expected to result in approximately one fewer intubation per 6 CPAP applications and to reduce hospitalization costs by $4075 per year for each CPAP application. Through sensitivity analyses the model was verified to be robust across a wide range of input variable assumptions. Previous studies have demonstrated the clinical effectiveness of CPAP in the management of acute pulmonary edema. Through a theoretical analysis that modeled the costs and clinical benefits of implementing CPAP in an urban EMS system, prehospital CPAP appears to be a cost-effective treatment.

  8. Cost-Effectiveness of Dapagliflozin versus Acarbose as a Monotherapy in Type 2 Diabetes in China

    PubMed Central

    Gu, Shuyan; Mu, Yiming; Zhai, Suodi; Zeng, Yuhang; Zhen, Xuemei; Dong, Hengjin

    2016-01-01

    Objective To estimate the long-term cost-effectiveness of dapagliflozin versus acarbose as monotherapy in treatment-naïve patients with type 2 diabetes mellitus (T2DM) in China. Methods The Cardiff Diabetes Model, an economic model designed to evaluate the cost-effectiveness of comparator therapies in diabetes, was used to simulate disease progression and estimate the long-term effect of treatments on patients. Systematic literature reviews, hospital surveys, meta-analysis and indirect treatment comparison were conducted to obtain model-required patient profiles, clinical data and costs. Health insurance costs (2015¥) were estimated over 40 years from a healthcare payer perspective. Univariate and probabilistic sensitivity analyses were performed. Results The model predicted that dapagliflozin had lower incidences of cardiovascular events, hypoglycemia and mortality events, was associated with a mean incremental benefit of 0.25 quality-adjusted life-years (QALYs) and with a lower cost of ¥8,439 compared with acarbose. This resulted in a cost saving of ¥33,786 per QALY gained with dapagliflozin. Sensitivity analyses determined that the results are robust. Conclusion Dapagliflozin is dominant compared with acarbose as monotherapy for Chinese T2DM patients, with a modest QALY gain and lower costs. Dapagliflozin offers a well-tolerated and cost-effective alternative medication for treatment-naïve patients in China, and may have a direct impact in reducing the disease burden of T2DM. PMID:27806087

  9. Sensitivity of Regulated Flow Regimes to Climate Change in the Western United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Tian; Voisin, Nathalie; Leng, Guoyong

    Water management activities or flow regulations modify water fluxes at the land surface and affect water resources in space and time. We hypothesize that flow regulations change the sensitivity of river flow to climate change with respect to unmanaged water resources. Quantifying these changes in sensitivity could help elucidate the impacts of water management at different spatiotemporal scales and inform climate adaptation decisions. In this study, we compared the emergence of significant changes in natural and regulated river flow regimes across the Western United States from simulations driven by multiple climate models and scenarios. We find that significant climate change-induced alterations in natural flow do not cascade linearly through water management activities. At the annual time scale, 50% of the Hydrologic Unit Code 4 (HUC4) sub-basins in the Western U.S. tend to have a regulated flow regime that is more sensitive to climate change than the natural flow regime. Seasonality analyses show that the sensitivity varies remarkably across the seasons. We also find that the sensitivity is related to the level of water management. For the 35% of the HUC4 sub-basins with the highest level of water management, the summer and winter flows tend to show a heightened sensitivity to climate change due to the complexity of joint reservoir operations. We further demonstrate that the impacts of considering water management in models are comparable to those that arise from uncertainties across climate models and emission scenarios. This prompts further climate adaptation research on the nonlinear effects of climate change transmitted through water management activities.

  10. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  11. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  12. Demonstrating Rapid Qualitative Elemental Analyses of Participant-Supplied Objects at a Public Outreach Event

    ERIC Educational Resources Information Center

    Schwarz, Gunnar; Burger, Marcel; Guex, Kevin; Gundlach-Graham, Alexander; Käser, Debora; Koch, Joachim; Velicsanyi, Peter; Wu, Chung-Che; Günther, Detlef; Hattendorf, Bodo

    2016-01-01

    A public demonstration of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) for fast and sensitive qualitative elemental analysis of solid everyday objects is described. This demonstration served as a showcase model for modern instrumentation (and for elemental analysis, in particular) to the public. Several steps were made to…

  13. Old Wine in New Skins: The Sensitivity of Established Findings to New Methods

    ERIC Educational Resources Information Center

    Foster, E. Michael; Wiley-Exley, Elizabeth; Bickman, Leonard

    2009-01-01

    Findings from an evaluation of a model system for delivering mental health services to youth were reassessed to determine the robustness of key findings to the use of methodologies unavailable to the original analysts. These analyses address a key concern about earlier findings--that the quasi-experimental design involved the comparison of two…

  14. ASSESSING THE IMPACT OF HUMAN PON1 POLYMORPHISMS: SENSITIVITY AND MONTE CARLO ANALYSES USING A PHYSIOLOGICALLY BASED PHARMACOKINETIC/ PHARMACODYNAMIC (PBPK/PD) MODEL FOR CHLORPYRIFOS. (R828608)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  15. Cost-effectiveness of methadone maintenance therapy as HIV prevention in an Indonesian high-prevalence setting: a mathematical modeling study.

    PubMed

    Wammes, Joost J G; Siregar, Adiatma Y; Hidayat, Teddy; Raya, Reynie P; van Crevel, Reinout; van der Ven, André J; Baltussen, Rob

    2012-09-01

    Indonesia faces an HIV epidemic that is in rapid transition. Injecting drug users (IDUs) are among the most heavily affected risk populations, with estimated prevalence of HIV reaching 50% or more in most parts of the country. Although Indonesia started opening methadone clinics in 2003, coverage remains low. We used the Asian Epidemic Model and Resource Needs Model to evaluate the long-term population-level preventive impact of expanding Methadone Maintenance Therapy (MMT) in West Java (43 million people). We compared intervention costs and the number of incident HIV cases in the intervention scenario with current practice to establish the cost per infection averted by expanding MMT. An extensive sensitivity analysis was performed on costs and epidemiological input, as well as on the cost-effectiveness calculation itself. Our analysis shows that expanding MMT from 5% coverage now to 40% coverage in 2019 would avert approximately 2400 HIV infections, at a cost of approximately US$7000 per HIV infection averted. Sensitivity analyses demonstrate that the use of alternative assumptions does not change the study conclusions. Our analyses suggest that expanding MMT is cost-effective, and support government policies to make MMT widely available as an integrated component of HIV/AIDS control in West Java. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Cost utility analysis of endoscopic biliary stent in unresectable hilar cholangiocarcinoma: decision analytic modeling approach.

    PubMed

    Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln

    2014-01-01

    Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in unresectable complex, Bismuth type II-IV, HCA patients. A decision analytic (Markov) model was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From the probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared with plastic stent drainage in unresectable complex HCA.
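
    As an illustration of how results of this kind are typically derived, the sketch below computes an incremental cost-effectiveness ratio (ICER) and the probability of cost-effectiveness at a willingness-to-pay threshold from probabilistic sensitivity analysis draws. The draws are simulated placeholders chosen only to echo the reported point estimate; they are not the study's data.

```python
# Hedged sketch: ICER and probability of cost-effectiveness at a
# willingness-to-pay (WTP) threshold, computed from probabilistic sensitivity
# analysis (PSA) draws. The draws are simulated placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Placeholder PSA draws of incremental cost (baht) and incremental QALYs for
# metal vs. plastic stents (assumed distributions).
d_cost = rng.normal(60_000, 15_000, n)
d_qaly = rng.normal(0.31, 0.12, n)

icer = d_cost.mean() / d_qaly.mean()

def prob_cost_effective(wtp):
    # Positive net monetary benefit means the metal stent is cost-effective.
    nmb = wtp * d_qaly - d_cost
    return (nmb > 0).mean()

print(f"ICER: {icer:,.0f} baht per QALY")
for wtp in (158_000, 474_000):   # 1x and 3x GDP per capita thresholds
    print(f"P(cost-effective) at WTP {wtp:,}: {prob_cost_effective(wtp):.1%}")
```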

  17. Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.

    PubMed

    Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim

    2016-04-01

    Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.

  18. Impact and Cost-effectiveness of 3 Doses of 9-Valent Human Papillomavirus (HPV) Vaccine Among US Females Previously Vaccinated With 4-Valent HPV Vaccine.

    PubMed

    Chesson, Harrell W; Laprise, Jean-François; Brisson, Marc; Markowitz, Lauri E

    2016-06-01

    We estimated the potential impact and cost-effectiveness of providing 3 doses of nonavalent human papillomavirus (HPV) vaccine (9vHPV) to females aged 13-18 years who had previously completed a series of quadrivalent HPV vaccine (4vHPV), a strategy we refer to as "additional 9vHPV vaccination." We used 2 distinct models: (1) the simplified model, which is among the most basic of the published dynamic HPV models, and (2) the US HPV-ADVISE model, a complex, stochastic, individual-based transmission-dynamic model. When assuming no 4vHPV cross-protection, the incremental cost per quality-adjusted life-year (QALY) gained by additional 9vHPV vaccination was $146 200 in the simplified model and $108 200 in the US HPV-ADVISE model ($191 800 when assuming 4vHPV cross-protection). In 1-way sensitivity analyses in the scenario of no 4vHPV cross-protection, the simplified model results ranged from $70 300 to $182 000, and the US HPV-ADVISE model results ranged from $97 600 to $118 900. The average cost per QALY gained by additional 9vHPV vaccination exceeded $100 000 in both models. However, the results varied considerably in sensitivity and uncertainty analyses. Additional 9vHPV vaccination is likely not as efficient as many other potential HPV vaccination strategies, such as increasing primary 9vHPV vaccine coverage. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  19. Race, Ancestry, and Development of Food-Allergen Sensitization in Early Childhood

    PubMed Central

    Tsai, Hui-Ju; Hong, Xiumei; Liu, Xin; Wang, Guoying; Pearson, Colleen; Ortiz, Katherin; Fu, Melanie; Pongracic, Jacqueline A.; Bauchner, Howard; Wang, Xiaobin

    2011-01-01

    OBJECTIVE: We examined whether the risk of food-allergen sensitization varied according to self-identified race or genetic ancestry. METHODS: We studied 1104 children (mean age: 2.7 years) from an urban multiethnic birth cohort. Food sensitization was defined as specific immunoglobulin E (sIgE) levels of ≥0.35 kilounits of allergen (kUA)/L for any of 8 common food allergens. Multivariate logistic regression analyses were used to evaluate the associations of self-identified race and genetic ancestry with food sensitization. Analyses also examined associations with numbers of food sensitizations (0, 1 or 2, and ≥3 foods) and with logarithmically transformed allergen sIgE levels. RESULTS: In this predominantly minority cohort (60.9% black and 22.5% Hispanic), 35.5% of subjects exhibited food sensitizations. In multivariate models, both self-reported black race (odds ratio [OR]: 2.34 [95% confidence interval [CI]: 1.24–4.44]) and African ancestry (in 10% increments; OR: 1.07 [95% CI: 1.02–1.14]) were associated with food sensitization. Self-reported black race (OR: 3.76 [95% CI: 1.09–12.97]) and African ancestry (OR: 1.19 [95% CI: 1.07–1.32]) were associated with a high number (≥3) of food sensitizations. African ancestry was associated with increased odds of peanut sIgE levels of ≥5 kUA/L (OR: 1.25 [95% CI: 1.01–1.52]). Similar ancestry associations were seen for egg sIgE levels of ≥2 kUA/L (OR: 1.13 [95% CI: 1.01–1.27]) and milk sIgE levels of ≥5 kUA/L (OR: 1.24 [95% CI: 0.94–1.63]), although findings were not significant for milk. CONCLUSIONS: Black children were more likely to be sensitized to food allergens and were sensitized to more foods. African ancestry was associated with peanut sensitization. PMID:21890831

  20. Risk Factors Predicting Infectious Lactational Mastitis: Decision Tree Approach versus Logistic Regression Analysis.

    PubMed

    Fernández, Leónides; Mediano, Pilar; García, Ricardo; Rodríguez, Juan M; Marín, María

    2016-09-01

    Objectives Lactational mastitis frequently leads to a premature abandonment of breastfeeding; its development has been associated with several risk factors. This study aims to use a decision tree (DT) approach to establish the main risk factors involved in mastitis and to compare its performance for predicting this condition with a stepwise logistic regression (LR) model. Methods Data from 368 cases (breastfeeding women with mastitis) and 148 controls were collected by a questionnaire about risk factors related to the medical history of mother and infant, pregnancy, delivery, postpartum, and breastfeeding practices. The performance of the DT and LR analyses was compared using the area under the receiver operating characteristic (ROC) curve. Sensitivity, specificity and accuracy of both models were calculated. Results Cracked nipples, antibiotics and antifungal drugs during breastfeeding, infant age, breast pumps, familial history of mastitis and throat infection were significant risk factors associated with mastitis in both analyses. Bottle-feeding and milk supply were related to mastitis for certain subgroups in the DT model. The areas under the ROC curves were similar for the LR and DT models (0.870 and 0.835, respectively). The LR model had better classification accuracy and sensitivity than the DT model, but the latter presented better specificity at the optimal threshold of each curve. Conclusions The DT and LR models constitute useful and complementary analytical tools to assess the risk of lactational infectious mastitis. The DT approach identifies high-risk subpopulations that need specific mastitis prevention programs and, therefore, it could be used to make the most of public health resources.
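
    A minimal sketch of the comparison strategy described above (fit both classifiers, then compare AUC, sensitivity and specificity at a threshold) is given below using scikit-learn on synthetic data; the features and labels are placeholders, not the study's questionnaire items.

```python
# Hedged sketch of the decision tree vs. logistic regression comparison on
# synthetic data; features are placeholders, not the mastitis dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# 516 samples mirrors the study size (368 cases + 148 controls); class balance
# is approximated via the weights argument.
X, y = make_classification(n_samples=516, n_features=10, weights=[0.3, 0.7],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {"logistic regression": LogisticRegression(max_iter=1000),
          "decision tree": DecisionTreeClassifier(max_depth=4, random_state=1)}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"{name}: AUC={roc_auc_score(y_te, prob):.3f}  "
          f"sensitivity={sens:.3f}  specificity={spec:.3f}")
```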

  1. Hydrologic climate change impacts in the Columbia River Basin and their sensitivity to methodological choices

    NASA Astrophysics Data System (ADS)

    Chegwidden, O.; Nijssen, B.; Mao, Y.; Rupp, D. E.

    2016-12-01

    The Columbia River Basin (CRB) in the United States' Pacific Northwest (PNW) is highly regulated for hydropower generation, flood control, fish survival, irrigation and navigation. Historically it has had a hydrologic regime characterized by winter precipitation in the form of snow, followed by a spring peak in streamflow from snowmelt. Anthropogenic climate change is expected to significantly alter this regime, causing changes to streamflow timing and volume. While numerous hydrologic studies have been conducted across the CRB, the impact of methodological choices in hydrologic modeling has not been as heavily investigated. To better understand their impact on the spread in modeled projections of hydrological change, we ran simulations involving permutations of a variety of methodological choices. We used outputs from ten global climate models (GCMs) and two representative concentration pathways from the Intergovernmental Panel on Climate Change's Fifth Assessment Report. After downscaling the GCM output using three different techniques, we forced the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS), both implemented at 1/16th degree (about 5 km), for the period 1950-2099. For the VIC model, we used three independently derived parameter sets. We will show results from the range of simulations, both in the form of basin-wide spatial analyses of hydrologic variables and through analyses of changes in streamflow at selected sites throughout the CRB. We will then discuss the differences in sensitivity to climate change seen among the projections, paying particular attention to differences between projections from the two hydrologic models and the different parameter sets.

  2. Modified Petri net model sensitivity to workload manipulations

    NASA Technical Reports Server (NTRS)

    White, S. A.; Mackinnon, D. P.; Lyman, J.

    1986-01-01

    Modified Petri Nets (MPNs) are investigated as a workload modeling tool. The results of an exploratory study of the sensitivity of MPNs to work load manipulations in a dual task are described. Petri nets have been used to represent systems with asynchronous, concurrent and parallel activities (Peterson, 1981). These characteristics led some researchers to suggest the use of Petri nets in workload modeling where concurrent and parallel activities are common. Petri nets are represented by places and transitions. In the workload application, places represent operator activities and transitions represent events. MPNs have been used to formally represent task events and activities of a human operator in a man-machine system. Some descriptive applications demonstrate the usefulness of MPNs in the formal representation of systems. It is the general hypothesis herein that in addition to descriptive applications, MPNs may be useful for workload estimation and prediction. The results are reported of the first of a series of experiments designed to develop and test a MPN system of workload estimation and prediction. This first experiment is a screening test of MPN model general sensitivity to changes in workload. Positive results from this experiment will justify the more complicated analyses and techniques necessary for developing a workload prediction system.

  3. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.

  4. Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors

    PubMed Central

    2014-01-01

    Introduction Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. Methods In a randomized controlled trial design (n = 257), 129 workers received mindfulness training, followed by e-coaching, lunch walking routes and fruit. The control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results There were no significant differences in lifestyle behaviors and determinants of these behaviors between the intervention and control group after 6 or 12 months. The sensitivity analyses showed effect modification for gender in sedentary behavior at work at 6-month follow-up, although the main analyses did not. Conclusions This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of a worksite mindfulness-based multi-component intervention as a health promotion intervention for all workers could not be established. PMID:24467802

  5. Brief Report: Effects of Sensory Sensitivity and Intolerance of Uncertainty on Anxiety in Mothers of Children with Autism Spectrum Disorder.

    PubMed

    Uljarević, Mirko; Carrington, Sarah; Leekam, Susan

    2016-01-01

    This study examined the relations between anxiety and the individual characteristics of sensory sensitivity (SS) and intolerance of uncertainty (IU) in mothers of children with ASD. The mothers of 50 children completed the Hospital Anxiety and Depression Scale, the Highly Sensitive Person Scale and the IU Scale. Anxiety was associated with both SS and IU, and IU was also associated with SS. Mediation analyses showed direct effects between anxiety and both IU and SS, but a significant indirect effect was found only in the model in which IU mediated the relation between SS and anxiety. This is the first study to characterize the nature of the IU and SS interrelation in predicting levels of anxiety.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops, designs and implements the European Test Blanket Systems (TBS) in ITER - the Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB) systems. Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and the TBS models. The limitations of the codes are thus identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. Code selection and the issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary-system models are built. In this work, the challenges met and the solutions used in developing both MELCOR and RELAP5 models of the HCLL and HCPB TBSs are shared. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the application of the methodology to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to the accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.

  7. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    PubMed

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for the diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (the estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP - 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for the diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: The estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
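
    The fitted rule can be applied directly as an individualized growth threshold. The sketch below implements the published equation; the example nodule values are hypothetical.

```python
# Hedged sketch: applying the paper's fitted rule for the estimated 95th
# percentile of absolute percentage error (APE) as an individualized growth
# threshold. The example nodule values are hypothetical.

def estimated_95th_percentile_ape(svp, ap):
    """Individualized variability threshold (%) from surface voxel proportion
    (SVP) and attachment proportion (AP), per the fitted quantile regression."""
    return 37.82 * svp + 48.60 * ap - 10.87

def assess_growth(volume_change_pct, svp, ap, uniform_threshold=25.0):
    individualized = estimated_95th_percentile_ape(svp, ap)
    return {
        "individualized threshold (%)": round(individualized, 1),
        "growth (individualized)": volume_change_pct > individualized,
        "growth (uniform 25%)": volume_change_pct > uniform_threshold,
    }

# Hypothetical nodule with low measurement variability: a modest volume change
# is flagged as growth by the individualized threshold but not by the uniform one.
print(assess_growth(volume_change_pct=18.0, svp=0.55, ap=0.05))
```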

  8. Developing physical exposure-based back injury risk models applicable to manual handling jobs in distribution centers.

    PubMed

    Lavender, Steven A; Marras, William S; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang

    2012-01-01

    Using our ultrasound-based "Moment Monitor," exposures to biomechanical low back disorder risk factors were quantified in 195 volunteers who worked in 50 different distribution center jobs. Low back injury rates, determined from a retrospective examination of each company's Occupational Safety and Health Administration (OSHA) 300 records over the 3-year period immediately prior to data collection, were used to classify each job's back injury risk level. The analyses focused on the factors differentiating the high-risk jobs (those having had 12 or more back injuries/200,000 hr of exposure) from the low-risk jobs (those defined as having no back injuries in the preceding 3 years). Univariate analyses indicated that measures of load moment exposure and force application could distinguish between high (n = 15) and low (n = 15) back injury risk distribution center jobs. A three-factor multiple logistic regression model capable of predicting high-risk jobs with very good sensitivity (87%) and specificity (73%) indicated that risk could be assessed using the mean across the sampled lifts of the peak forward and or lateral bending dynamic load moments that occurred during each lift, the mean of the peak push/pull forces across the sampled lifts, and the mean duration of the non-load exposure periods. A surrogate model, one that does not require the Moment Monitor equipment to assess a job's back injury risk, was identified although with some compromise in model sensitivity relative to the original model.

  9. Predicted effect size of lisdexamfetamine treatment of attention deficit/hyperactivity disorder (ADHD) in European adults: Estimates based on indirect analysis using a systematic review and meta-regression analysis.

    PubMed

    Fridman, M; Hodgkins, P S; Kahle, J S; Erder, M H

    2015-06-01

    There are few approved therapies for adults with attention-deficit/hyperactivity disorder (ADHD) in Europe. Lisdexamfetamine (LDX) is an effective treatment for ADHD; however, no clinical trials examining the efficacy of LDX specifically in European adults have been conducted. Therefore, to estimate the efficacy of LDX in European adults we performed a meta-regression of existing clinical data. A systematic review identified US- and Europe-based randomized efficacy trials of LDX, atomoxetine (ATX), or osmotic-release oral system methylphenidate (OROS-MPH) in children/adolescents and adults. A meta-regression model was then fitted to the published/calculated effect sizes (Cohen's d) using medication, geographical location, and age group as predictors. The LDX effect size in European adults was extrapolated from the fitted model. Sensitivity analyses performed included using adult-only studies and adding studies with placebo designs other than a standard pill-placebo design. Twenty-two of 2832 identified articles met inclusion criteria. The model-estimated effect size of LDX for European adults was 1.070 (95% confidence interval: 0.738, 1.401), larger than the 0.8 threshold for large effect sizes. The overall model fit was adequate (80%) and stable in the sensitivity analyses. This model predicts that LDX may have a large treatment effect size in European adults with ADHD. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
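
    The core of the approach is an ordinary meta-regression: study effect sizes are regressed on indicator variables for medication, region and age group, and the fitted model is used to predict the unobserved LDX/Europe/adult cell. The sketch below illustrates the idea with placeholder effect sizes, not the 22 included studies; a real meta-regression would also weight studies by their precision.

```python
# Hedged sketch of the meta-regression idea: regress effect sizes on indicator
# variables and extrapolate to a combination with no direct trial evidence.
# The design matrix and Cohen's d values are illustrative placeholders.
import numpy as np

# Columns: intercept, is_LDX, is_ATX (OROS-MPH as reference),
#          is_Europe, is_adult
X = np.array([
    [1, 1, 0, 0, 0],   # LDX, US, children
    [1, 1, 0, 0, 1],   # LDX, US, adults
    [1, 0, 1, 0, 1],   # ATX, US, adults
    [1, 0, 1, 1, 1],   # ATX, Europe, adults
    [1, 0, 0, 1, 0],   # OROS-MPH, Europe, children
    [1, 0, 0, 0, 1],   # OROS-MPH, US, adults
], dtype=float)
d = np.array([1.6, 1.4, 0.6, 0.5, 0.7, 0.8])   # placeholder effect sizes

beta, *_ = np.linalg.lstsq(X, d, rcond=None)   # unweighted least squares

# Extrapolate: LDX, Europe, adults (no direct trial evidence for this cell).
x_new = np.array([1, 1, 0, 1, 1], dtype=float)
print(f"Predicted effect size for LDX in European adults: {x_new @ beta:.2f}")
```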

  10. Examining the potential role of a supervised injection facility in Saskatoon, Saskatchewan, to avert HIV among people who inject drugs

    PubMed Central

    Jozaghi, Ehsan; Jackson, Asheka

    2015-01-01

    Background: Research predicting the public health and fiscal impact of Supervised Injection Facilities (SIFs) across different cities in Canada has reported positive results on the reduction of HIV cases among People Who Inject Drugs (PWID). Most of the existing studies have focused on the outcomes of Insite, located in the Vancouver Downtown Eastside (DTES). Previous attention has not been afforded to other affected areas of Canada. The current study seeks to address this deficiency by assessing the cost-effectiveness of opening a SIF in Saskatoon, Saskatchewan. Methods: We used two different mathematical models commonly used in the literature, including sensitivity analyses, to estimate the number of HIV infections averted due to the establishment of a SIF in the city of Saskatoon, Saskatchewan. Results: Based on cumulative cost-effectiveness results, SIF establishment is cost-effective. The benefit-to-cost ratio was conservatively estimated to be 1.35 for the first two potential facilities. The study relied on 34% and 14% needle-sharing rates for the sensitivity analyses. The results for both the sensitivity analyses and the baseline estimates indicated positive prospects for the establishment of a SIF in Saskatoon. Conclusion: The opening of a SIF in Saskatoon, Saskatchewan is financially prudent in reducing taxpayers' expenses and averting HIV infections among PWID. PMID:26029896

  11. Ecological Sensitivity Evaluation of Tourist Region Based on Remote Sensing Image - Taking Chaohu Lake Area as a Case Study

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.

    2018-04-01

    Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, a range of environmental resource information for a tourist region can be obtained from remote sensing imagery. Combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, the factors of human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index were used to construct the eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of the eco-sensitivity evaluation index, and using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2, accounting for about 33.12%; the sensitive area covered 5130.0522 km2, accounting for about 37.12%; the sub-sensitive area covered 3729.9312 km2, accounting for 26.99%; and the insensitive area covered 382.4399 km2, accounting for about 2.77%. At the same time, spatial differences in the ecological sensitivity of the Chaohu Lake basin were found. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed in gently sloping platform areas; the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable regional development of the Chaohu Lake tourist area.
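
    The index construction described above amounts to an AHP-weighted overlay of normalized factor layers followed by equal-interval reclassification into four grades. The sketch below illustrates this mechanism with placeholder weights and small random rasters, not the study's data.

```python
# Hedged sketch: AHP-weighted overlay of normalized factor rasters followed by
# equal-interval reclassification into four sensitivity grades. Weights and
# example rasters are placeholders, not the study's values.
import numpy as np

rng = np.random.default_rng(42)
shape = (100, 100)
# Placeholder factor layers, each already normalized to the 0-1 range.
factors = {name: rng.random(shape) for name in
           ["disturbance", "land_use", "productivity", "evenness",
            "vegetation", "elevation", "slope", "water_index"]}
# Placeholder AHP weights (must sum to 1).
weights = {"disturbance": 0.20, "land_use": 0.15, "productivity": 0.10,
           "evenness": 0.05, "vegetation": 0.15, "elevation": 0.10,
           "slope": 0.15, "water_index": 0.10}

index = sum(w * factors[name] for name, w in weights.items())

# Equal-interval reclassification: 1 = insensitive ... 4 = very sensitive.
edges = np.linspace(index.min(), index.max(), 5)
grades = np.digitize(index, edges[1:-1]) + 1

for g, label in zip(range(1, 5),
                    ["insensitive", "sub-sensitive", "sensitive", "very sensitive"]):
    pct = (grades == g).mean() * 100
    print(f"{label:>15}: {pct:5.1f}% of area")
```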

  12. Mind matters: A meta-analysis on parental mentalization and sensitivity as predictors of infant-parent attachment.

    PubMed

    Zeegers, Moniek A J; Colonnesi, Cristina; Stams, Geert-Jan J M; Meins, Elizabeth

    2017-12-01

    Major developments in attachment research over the past 2 decades have introduced parental mentalization as a predictor of infant-parent attachment security. Parental mentalization is the degree to which parents show frequent, coherent, or appropriate appreciation of their infants' internal states. The present study examined the triangular relations between parental mentalization, parental sensitivity, and attachment security. A total of 20 effect sizes (N = 974) on the relation between parental mentalization and attachment, 82 effect sizes (N = 6,664) on the relation between sensitivity and attachment, and 24 effect sizes (N = 2,029) on the relation between mentalization and sensitivity were subjected to multilevel meta-analyses. The results showed a pooled correlation of r = .30 between parental mentalization and infant attachment security, and rs of .25 for the correlations between sensitivity and attachment security, and between parental mentalization and sensitivity. A meta-analytic structural equation model was performed to examine the combined effects of mentalization and sensitivity as predictors of infant attachment. Together, the predictors explained 12% of the variance in attachment security. After controlling for the effect of sensitivity, the relation between parental mentalization and attachment remained, r = .24; the relation between sensitivity and attachment remained after controlling for parental mentalization, r = .19. Sensitivity also mediated the relation between parental mentalization and attachment security, r = .07, suggesting that mentalization exerts both direct and indirect influences on attachment security. The results imply that parental mentalization should be incorporated into existing models that map the predictors of infant-parent attachment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Cost effectiveness of memantine in Alzheimer's disease: an analysis based on a probabilistic Markov model from a UK perspective.

    PubMed

    Jones, Roy W; McCrone, Paul; Guilhaume, Chantal

    2004-01-01

    Clinical trials with memantine, an uncompetitive moderate-affinity NMDA antagonist, have shown improved clinical outcomes, increased independence and a trend towards delayed institutionalisation in patients with moderately severe-to-severe Alzheimer's disease. In a randomised double-blind, placebo-controlled, 28-week study conducted in the US, reductions in resource utilisation and total healthcare costs were noted with memantine relative to placebo. While these findings suggest that, compared with placebo, memantine provides cost savings, further analyses may help to quantify potential economic gains over a longer treatment period. To evaluate the cost effectiveness of memantine therapy compared with no pharmacological treatment in patients with moderately severe-to-severe Alzheimer's disease over a 2-year period. A Markov model was constructed to simulate patient progression through a series of health states related to severity, dependency (determined by patient scores on the Alzheimer's Disease Cooperative Study-Activities of Daily Living [ADCS-ADL] inventory) and residential status ('institutionalisation'), with a time horizon of 2 years (each 6-month Markov cycle was repeated four times). Transition probabilities from one health state to another 6 months later were mainly derived from a 28-week, randomised, double-blind, placebo-controlled clinical trial. Inputs related to epidemiological and cost data were derived from a UK longitudinal epidemiological study, while data on quality-adjusted life-years (QALYs) were derived from a Danish longitudinal study. To ensure conservative estimates from the model, the base case analysis assumed drug effectiveness was limited to 12 months. Monte Carlo simulations were performed for each state parameter following definition of a priori distributions for the main variables of the model. Sensitivity analyses included a worst-case scenario in which memantine was effective for 6 months and one-way sensitivity analyses on key parameters. Finally, a subgroup analysis was performed to determine which patients were most likely to benefit from memantine. Informal care was not included in this model, as the costs were considered from a National Health Service and Personal Social Services perspective. The base case analysis found that, compared with no treatment, memantine was associated with lower costs and greater clinical effectiveness in terms of years of independence, years in the community and QALYs. Sensitivity analyses supported these findings. For each category of Alzheimer's disease patient examined, treatment with memantine was a cost-effective strategy. The greatest economic gain of memantine treatment was in independent patients with a Mini-Mental State Examination score of ≥10. This model suggests that memantine treatment is cost effective and provides cost savings compared with no pharmacological treatment. These benefits appear to result from prolonged patient independence and delayed institutionalisation for moderately severe and severe Alzheimer's disease patients on memantine compared with no pharmacological treatment.
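
    The structure of such an analysis is a Markov cohort model: the cohort is distributed across health states, moved forward one cycle at a time by a transition matrix, and costs and QALYs are accumulated along the way. The sketch below shows the mechanics with illustrative states, probabilities, costs and utilities; these are not the memantine model's inputs.

```python
# Hedged sketch of a Markov cohort model with 6-month cycles over 2 years
# (4 cycles). All states, transition probabilities, costs and utilities are
# illustrative placeholders, not the published model's inputs.
import numpy as np

states = ["independent", "dependent", "institutionalised", "dead"]
# Rows: from-state, columns: to-state (per 6-month cycle); rows sum to 1.
P = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.00, 0.70, 0.20, 0.10],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],
])
cost_per_cycle = np.array([1_000., 4_000., 9_000., 0.])   # placeholder costs
utility_per_cycle = np.array([0.35, 0.25, 0.15, 0.0])     # QALYs per 6 months

cohort = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts independent
total_cost = total_qaly = 0.0
for cycle in range(4):                     # 4 x 6 months = 2 years
    total_cost += cohort @ cost_per_cycle
    total_qaly += cohort @ utility_per_cycle
    cohort = cohort @ P                    # move the cohort one cycle forward

print(f"Expected 2-year cost per patient: {total_cost:,.0f}")
print(f"Expected 2-year QALYs per patient: {total_qaly:.2f}")
```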

  14. Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu

    2002-02-01

    This paper describes details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for atmosphere, soil, and vegetation and radiation schemes for the transmission of solar and longwave radiations in canopy. The atmosphere submodel solves prognostic equations for horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on leaf surface and the downward liquid water flux. The model performance was tested by using measured data of the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, concerning the diurnal variation and change in 32 days of the first CASES field program in 1997, CASES-97. The measured surface fluxes did not satisfy the energy balance, so sensible and latent heat fluxes obtained by the eddy correlation method were corrected. By using options of the solar radiation scheme, which addresses the effect of the direct solar radiation component, calculated albedo agreed well with the observations. Some sensitivity analyses were also done for model settings. Model calculations of surface fluxes and surface temperature were in good agreement with measurements as a whole.

  15. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    NASA Astrophysics Data System (ADS)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

    The Solo-Ngawi toll road project is part of the Trans Java toll road mega project initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with its business plan. The determination of appropriate toll rates will affect progress towards regional economic sustainability and reduce traffic congestion; such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment. The data were then analysed with logit and probit models. Based on the analysis, it is found that the effect of fare changes on WTP is more sensitive in the binomial logit model than in the probit model under the same travel conditions. The range of tariff change against WTP values in the binomial logit model is 20% greater than the corresponding range in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
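
    The two models differ only in the link function mapping utility to choice probability (logistic versus standard normal CDF), which is what drives the difference in fare sensitivity. The sketch below contrasts the two curves using placeholder coefficients, not the estimated Solo-Ngawi values.

```python
# Hedged sketch: willingness-to-pay probability under a binary logit and a
# binary probit model as the toll fare changes. Coefficients are placeholders.
from math import erf, exp, sqrt

def logit_prob(u):
    # Logistic CDF
    return 1.0 / (1.0 + exp(-u))

def probit_prob(u):
    # Standard normal CDF
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

# Placeholder utility coefficients; identical values are used in both models
# purely for illustration (in practice logit and probit scales differ).
beta0, beta_fare = 2.0, -0.15
for fare in range(0, 31, 5):       # toll fare in thousand rupiah (assumed unit)
    u = beta0 + beta_fare * fare
    print(f"fare {fare:2d}: P(pay | logit) = {logit_prob(u):.2f}, "
          f"P(pay | probit) = {probit_prob(u):.2f}")
```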

  16. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  17. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  18. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2015-04-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.
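
    A standard diagnostic for validating a tangent-linear/adjoint pair of the kind mentioned here is the adjoint ("dot-product") test, which checks that <Mx, y> equals <x, M*y> to machine precision. The sketch below illustrates the test with a toy linear operator; it is a generic example, not NEMOTAM code.

```python
# Hedged sketch of the adjoint (dot-product) test: for a tangent-linear
# operator M and its adjoint M^T, <M x, y> must equal <x, M^T y> to machine
# precision. A toy matrix stands in for the tangent-linear model.
import numpy as np

rng = np.random.default_rng(7)
n = 50
M = rng.normal(size=(n, n))        # toy tangent-linear operator

def tangent(dx):
    return M @ dx

def adjoint(dy):
    return M.T @ dy

x = rng.normal(size=n)             # perturbation of the control vector
y = rng.normal(size=n)             # arbitrary vector in model space

lhs = np.dot(tangent(x), y)        # <M x, y>
rhs = np.dot(x, adjoint(y))        # <x, M^T y>
print(f"<Mx, y>  = {lhs:.12e}")
print(f"<x, M*y> = {rhs:.12e}")
print(f"relative difference = {abs(lhs - rhs) / abs(lhs):.2e}")   # ~1e-16
```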

  19. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2014-10-01

    The tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  20. Cost-effectiveness of 64-slice CT angiography compared to conventional coronary angiography based on a coverage with evidence development study in Ontario.

    PubMed

    Goeree, Ron; Blackhouse, Gord; Bowen, James M; O'Reilly, Daria; Sutherland, Simone; Hopkins, Robert; Chow, Benjamin; Freeman, Michael; Provost, Yves; Dennie, Carole; Cohen, Eric; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan; Paul, Narinder; Parker, John D

    2013-10-01

    Conventional coronary angiography (CCA) is the standard diagnostic test for coronary artery disease (CAD), but multi-detector computed tomography coronary angiography (CTCA) is a non-invasive alternative. A multi-center coverage with evidence development study was undertaken and combined with an economic model to estimate the cost-effectiveness of CTCA followed by CCA vs CCA alone. Alternative assumptions were tested in patient scenario and sensitivity analyses. CCA was found to dominate CTCA; however, CTCA was relatively more cost-effective in females, with advancing age, in patients with lower pre-test probabilities of CAD, the higher the sensitivity of CTCA, and the lower the probability of undergoing a confirmatory CCA following a positive CTCA. Results were very sensitive to alternative patient populations and modeling assumptions. Careful consideration of patient characteristics, procedures to improve the diagnostic yield of CTCA and selective use of CCA following CTCA will impact whether CTCA is cost-effective or dominates CCA.

  1. Optics of retinal oil droplets: a model of light collection and polarization detection in the avian retina.

    PubMed

    Young, S R; Martin, G R

    1984-01-01

    A wave optical model was used to analyse the scattering properties of avian retinal oil droplets. Computations for the near field region showed that oil droplets perform significant light collection in cone photoreceptors and so enhance outer segment photon capture rates. Scattering by the oil droplet of the principal cone of a double cone pair, combined with accessory cone dichroic absorption under conditions of transverse illumination, may mediate avian polarization sensitivity.

  2. System cost performance analysis (study 2.3). Volume 1: Executive summary. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.

  3. Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector

    NASA Astrophysics Data System (ADS)

    Lenel, U. R.; Davies, D. G. S.; Moore, M. A.

    An analytical technique (Analysis with Uncertain Qualities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.

  4. Commercial aspects of semi-reusable launch systems

    NASA Astrophysics Data System (ADS)

    Obersteiner, M. H.; Müller, H.; Spies, H.

    2003-07-01

    This paper presents a business planning model for a commercial space launch system. The financing model is based on market analyses and projections combined with market capture models. An operations model is used to derive the annual cash income. Parametric cost modeling, development and production schedules are used for quantifying the annual expenditures, the internal rate of return, the break-even point of positive cash flow and the respective prices per launch. Alternative consortia structures, cash flow methods, capture rates and launch prices are used to examine the sensitivity of the model. The model is then applied to a promising semi-reusable launcher concept, showing the general achievability of the commercial approach and the necessary pre-conditions.

  5. Development of a Self-Determination Measure for College Students: Validity Evidence for the Basic Needs Satisfaction at College Scale

    ERIC Educational Resources Information Center

    Jenkins-Guarnieri, Michael A.; Vaughan, Angela L.; Wright, Stephen L.

    2015-01-01

    We adapted a work self-determination measure to create the Basic Needs Satisfaction at College Scale. Confirmatory factor analysis and item response theory analyses with data from 525 adults supported a 3-factor model with 13 items most sensitive for lower to middle range levels of the autonomy, competence, and relatedness constructs.

  6. Variable temperature sensitivity of soil organic carbon in North American forests

    Treesearch

    Cinzia Fissore; Christian P. Giardina; Christopher W. Swanston; Gary M. King; Randall K. Kolka

    2009-01-01

    We investigated mean residence time (MRT) for soil organic carbon (SOC) sampled from paired hardwood and pine forests located along a 22 °C mean annual temperature (MAT) gradient in North America. We used acid hydrolysis fractionation, radiocarbon analyses, long-term laboratory incubations (525-d), and a three-pool model to describe the size and kinetics of...

  7. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    PubMed

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  8. Financial modelling of femtosecond laser-assisted cataract surgery within the National Health Service using a 'hub and spoke' model for the delivery of high-volume cataract surgery.

    PubMed

    Roberts, H W; Ni, M Z; O'Brart, D P S

    2017-03-16

    To develop financial models which offset additional costs associated with femtosecond laser (FL)-assisted cataract surgery (FLACS) against improvements in productivity and to determine important factors relating to its implementation into the National Health Service (NHS). FL platforms are expensive in both initial purchase and running costs. The additional costs associated with FL technology might be offset by an increase in surgical efficiency. Using a 'hub and spoke' model to provide high-volume cataract surgery, we designed a financial model comparing FLACS against conventional phacoemulsification surgery (CPS). The model was populated with averaged financial data from 4 NHS foundation trusts and 4 commercial organisations manufacturing FL platforms. We tested our model with sensitivity and threshold analyses to allow for variations or uncertainties. The averaged weekly workload for cataract surgery using our hub and spoke model required either 8 or 5.4 theatre sessions with CPS or FLACS, respectively. Despite reduced theatre utilisation, CPS (average £433/case) was still found to be 8.7% cheaper than FLACS (average £502/case). The greatest associated cost of FLACS was the patient interface (PI) (average £135/case). Sensitivity analyses demonstrated that FLACS could be less expensive than CPS, but only if efficiency, in terms of cataract procedures per theatre list, increased by over 100%, or if the cost of the PI was reduced by almost 70%. The financial viability of FLACS within the NHS is currently precluded by the cost of the PI and the lack of knowledge regarding any gains in operational efficiency.
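
    As an illustration of the threshold analysis described above, the sketch below searches for the patient-interface (PI) price at which the per-case cost of FLACS would equal that of CPS. It assumes a deliberately simplified linear per-case cost: the non-PI figure of £367 is just the reported £502 average minus the £135 PI cost, and the toy ignores the theatre-efficiency effects in the published model, so it will not reproduce the reported ~70% threshold; it only shows the mechanics of a one-way threshold search in Python.

        # One-way threshold (break-even) analysis sketch. Deliberately simplified linear cost;
        # the published NHS model includes theatre and efficiency effects omitted here.

        def flacs_cost_per_case(pi_price, non_pi_cost=367.0):
            # non_pi_cost: reported GBP 502 average minus the GBP 135 PI cost
            return non_pi_cost + pi_price

        CPS_COST_PER_CASE = 433.0     # reported average per-case cost of CPS, GBP
        BASE_PI_PRICE = 135.0         # reported average PI price, GBP

        def threshold(cost_fn, target, lo, hi, tol=0.01):
            """Bisection: parameter value where an increasing cost_fn(x) equals target."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if cost_fn(mid) > target:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        pi_break_even = threshold(flacs_cost_per_case, CPS_COST_PER_CASE, 0.0, BASE_PI_PRICE)
        print(f"FLACS matches CPS per-case cost at a PI price of about GBP {pi_break_even:.0f} "
              f"({100 * (1 - pi_break_even / BASE_PI_PRICE):.0f}% below the base price)")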

  9. Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer

    PubMed Central

    Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.

    2015-01-01

    Purpose: Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods: We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results: Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion: Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
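
    The incremental cost-effectiveness ratio reported above is simply incremental cost divided by incremental QALYs. A minimal sketch of that arithmetic, using the rounded figures quoted in the record ($40,000 and 0.04 QALYs), is shown below; the published $900,000 per QALY reflects the unrounded model outputs, so the toy quotient will not match it exactly.

        # ICER arithmetic sketch using the rounded values quoted in the record.

        def icer(delta_cost, delta_qaly):
            """Incremental cost-effectiveness ratio; None if the strategy is dominated."""
            if delta_qaly <= 0 and delta_cost >= 0:
                return None    # costs more and gains nothing: dominated
            return delta_cost / delta_qaly

        delta_cost = 40_000.0    # incremental cost of regorafenib vs placebo, USD (rounded)
        delta_qaly = 0.04        # incremental QALYs (rounded)

        print(f"ICER ~ ${icer(delta_cost, delta_qaly):,.0f} per QALY gained")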

  10. Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi.

    PubMed

    Larsen-Cooper, Erin; Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann

    2016-04-01

    The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators.
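
    The capacity scenario described above (cost per user falling sharply when the service operates at full capacity) follows from spreading a largely fixed cost base over more users. The sketch below illustrates that relationship; the fixed/variable split, user counts, and capacity are hypothetical placeholders, and only the general shape of the result mirrors the reported sensitivity analysis.

        # Cost-per-user sketch with a mostly fixed cost base (all inputs hypothetical).

        FIXED_COST = 50_000.0          # hypothetical annual fixed cost (platform, staff), USD
        VARIABLE_COST_PER_USER = 4.0   # hypothetical incremental cost per enrolled user, USD
        CAPACITY = 4_000               # hypothetical maximum users the service can support

        def cost_per_user(n_users):
            return (FIXED_COST + VARIABLE_COST_PER_USER * n_users) / n_users

        for utilisation in (0.5, 0.75, 1.0):
            n = int(CAPACITY * utilisation)
            print(f"{utilisation:.0%} of capacity ({n} users): ${cost_per_user(n):.2f} per user")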

  11. Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi

    PubMed Central

    Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann

    2016-01-01

    Abstract Background: The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. Materials and Methods: A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. Results: The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. Conclusions: We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators. PMID:26348994

  12. Modeling barrier island response to sea-level rise in the Outer Banks, North Carolina

    USGS Publications Warehouse

    Moore, Laura J.; List, Jeffrey H.; Williams, S. Jeffress; Stolper, David

    2007-01-01

    An 8500-year Holocene simulation developed in GEOMBEST provides a possible scenario to explain the evolution of barrier coast between Rodanthe and Cape Hatteras, NC. Sensitivity analyses suggest that in the Outer Banks, the rate of sea-level rise is the most important factor in determining how barrier islands evolve. The Holocene simulation provides a basis for future simulations, which suggest that if sea level rises up to 0.88 m by AD 2100, as predicted by the highest estimates of the Intergovernmental Panel on Climate Change, the barrier in the study area may migrate on the order of 2.5 times more rapidly than at present. If sea level rises beyond IPCC predictions to reach 1.4–1.9 m above modern sea level by AD 2100, model results suggest that barrier islands in the Outer Banks may become vulnerable to threshold collapse, disintegrating during storm events, by the end of the next century. Consistent with sensitivity analyses, additional simulations indicate that anthropogenic activities, such as increasing the rate of sediment supply through beach nourishment, will only slightly affect barrier island migration rates and barrier island vulnerability to collapse.

  13. Disclosure Control using Partially Synthetic Data for Large-Scale Health Surveys, with Applications to CanCORS

    PubMed Central

    Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.

    2013-01-01

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the United States. We review inferential methods for partially synthetic data, and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model, to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are suited best for preliminary data analyses purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. PMID:23670983

  14. Validation of a portable nitric oxide analyzer for screening in primary ciliary dyskinesias.

    PubMed

    Harris, Amanda; Bhullar, Esther; Gove, Kerry; Joslin, Rhiannon; Pelling, Jennifer; Evans, Hazel J; Walker, Woolf T; Lucas, Jane S

    2014-02-10

    Nasal nitric oxide (nNO) levels are very low in primary ciliary dyskinesia (PCD), and nNO measurement is used as a screening test. We assessed the reliability and usability of a hand-held analyser in comparison to a stationary nitric oxide (NO) analyser in 50 participants (15 healthy, 13 PCD, 22 other respiratory diseases; age 6-79 years). Nasal NO was measured using a stationary NO analyser during a breath-holding maneuver, and using a hand-held analyser during tidal breathing, sampling at 2 ml/sec or 5 ml/sec. The three methods were compared for their specificity and sensitivity as a screen for PCD, their success rate in different age groups, within-subject repeatability and acceptability. Correlation between methods was assessed. Valid nNO measurements were obtained in 94% of participants using the stationary analyser, 96% using the hand-held analyser at 5 ml/sec and 76% at 2 ml/sec. The hand-held device at 5 ml/sec had excellent sensitivity and specificity as a screening test for PCD during tidal breathing (cut-off of 30 nL/min, 100% sensitivity, >95% specificity). The cut-off using the stationary analyser during breath-hold was 38 nL/min (100% sensitivity, 95% specificity). The stationary and hand-held analyser (5 ml/sec) showed reasonable within-subject repeatability (coefficient of variation = 15%). The hand-held NO analyser provides a promising screening tool for PCD.
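
    For a screening test such as nNO, where low values indicate disease, sensitivity and specificity at a given cut-off follow directly from the counts of true and false positives and negatives. The sketch below computes both for the two cut-offs mentioned above; the measurement values and diagnoses are made up for illustration.

        # Screening cut-off sketch: "positive" means nNO at or below the cut-off.

        def sens_spec(values_nl_min, has_pcd, cutoff):
            """Return (sensitivity, specificity) for a low-value-positive cut-off."""
            tp = sum(v <= cutoff and d for v, d in zip(values_nl_min, has_pcd))
            fn = sum(v > cutoff and d for v, d in zip(values_nl_min, has_pcd))
            tn = sum(v > cutoff and not d for v, d in zip(values_nl_min, has_pcd))
            fp = sum(v <= cutoff and not d for v, d in zip(values_nl_min, has_pcd))
            return tp / (tp + fn), tn / (tn + fp)

        nno = [12, 18, 25, 9, 220, 310, 150, 95, 400, 60]   # nL/min, hypothetical
        pcd = [True, True, True, True, False, False, False, False, False, False]

        for cutoff in (30, 38):
            se, sp = sens_spec(nno, pcd, cutoff)
            print(f"cut-off {cutoff} nL/min: sensitivity {se:.0%}, specificity {sp:.0%}")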

  15. Mink liver TEQs and reproductive NOAEL resulting from dietary exposure to fish from Saginaw Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tillitt, D.E.; Gale, R.W.; Peterman, P.H.

    1994-12-31

    Mink are known to be very sensitive to the toxic effects of planar halogenated hydrocarbons (PHHs). Previously the authors reported the reproductive effects in mink fed a diet containing 10, 20, or 40% fish taken from Saginaw Bay, Lake Huron. In this presentation the authors report the complete chemical analyses of the diets and the adult mink livers, along with a comparison of an additive model of toxicity with the results of the H4IIE bioassay on these samples. The H4IIE bioassay consistently estimated greater dioxin toxic-equivalents (TEQs) as compared to an additive model of toxicity and chemical analyses. TEQs derived from chemical analyses accounted for approximately 60% of the TEQs observed with the H4IIE bioassay. The difference is likely due to the presence of compounds which were not quantitated as opposed to synergistic interactions of the mixtures. Significant reproductive effects were observed in the lowest treatment group (10% fish, or 19.4 pg TEQs/g). The implications of these chemical and bioassay analyses on the calculation of a NOAEL will be discussed.

  16. Sensitivity and specificity considerations for fMRI encoding, decoding, and mapping of auditory cortex at ultra-high field.

    PubMed

    Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa

    2018-01-01

    Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.

  18. Atmospheric Modeling of Cool Giant and Supergiant Stars

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey L.

    1984-07-01

    We propose to continue our collaborative program of obtaining and analysing high dispersion SWP spectra of cool stars. We request high dispersion, short wavelength IUE spectra of the stars alpha Tau (K5III), gamma Cru (M3III), epsilon Peg (K2Ib) and beta Cam (G0Ib) with exposure times of 16 hours or more. These spectra will provide measurements of line profiles, widths and Doppler shifts in addition to density-sensitive and opacity-sensitive line ratios. Models of chromospheric and transition region (where present) structure will be calculated by a combination of emission measure analysis, line opacity/probability of escape methods and model atmosphere calculations for optically thick resonance lines such as MgII h and k, including partial redistribution radiative transfer. These models will be used to investigate the atmospheric energy balance and the nature of energy transport and nonradiative energy deposition processes. The results will be considered in relation to stellar evolution and compared with the chromospheric properties of other stars previously studied by the authors and their collaborators.

  19. Identifying optimal threshold statistics for elimination of hookworm using a stochastic simulation model.

    PubMed

    Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M

    2017-06-30

    There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. Optimal post-treatment prevalence threshold value for determining elimination is in the range 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. Number of communities in a cluster is a key indicator of probability of elimination and PPV. Extending the time, post-study endpoint, at which the threshold statistic is measured improves PPV value in discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities for elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
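
    The positive predictive value of an end-point prevalence threshold can be estimated directly from simulated cluster trajectories: among clusters declared "eliminated" because their end-of-study prevalence falls below the threshold, what fraction truly interrupted transmission in the long run? The sketch below shows that bookkeeping with a toy stand-in for the transmission model; the simulated prevalences and elimination outcomes are made up and carry no epidemiological meaning.

        # PPV-of-threshold sketch over toy simulated clusters.
        import random

        random.seed(1)

        def simulate_cluster():
            """Toy stand-in for a transmission-model run: (end-point prevalence, eliminated?)."""
            eliminated = random.random() < 0.5
            prev = random.uniform(0.0, 0.02) if eliminated else random.uniform(0.005, 0.15)
            return prev, eliminated

        clusters = [simulate_cluster() for _ in range(5000)]

        def ppv(threshold):
            declared = [(p, e) for p, e in clusters if p <= threshold]
            return sum(e for _, e in declared) / len(declared) if declared else float("nan")

        for thr in (0.005, 0.01, 0.02, 0.05):
            print(f"threshold {thr:.1%}: PPV = {ppv(thr):.2f}")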

  20. Cost-Effectiveness Analysis of Preoperative Versus Postoperative Radiation Therapy in Extremity Soft Tissue Sarcoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Xuanlu M.; Louie, Alexander V.; Ashman, Jonathan

    Purpose: Surgery combined with radiation therapy (RT) is the cornerstone of multidisciplinary management of extremity soft tissue sarcoma (STS). Although RT can be given in either the preoperative or the postoperative setting with similar local recurrence and survival outcomes, the side effect profiles, costs, and long-term functional outcomes are different. The aim of this study was to use decision analysis to determine optimal sequencing of RT with surgery in patients with extremity STS. Methods and Materials: A cost-effectiveness analysis was conducted using a state transition Markov model, with quality-adjusted life years (QALYs) as the primary outcome. A time horizon of 5 years, a cycle length of 3 months, and a willingness-to-pay threshold of $50,000/QALY were used. One-way deterministic sensitivity analyses were performed to determine the thresholds at which each strategy would be preferred. The robustness of the model was assessed by probabilistic sensitivity analysis. Results: Preoperative RT is a more cost-effective strategy ($26,633/3.00 QALYs) than postoperative RT ($28,028/2.86 QALYs) in our base case scenario. Preoperative RT is the superior strategy with either 3-dimensional conformal RT or intensity-modulated RT. One-way sensitivity analyses identified the relative risk of chronic adverse events as having the greatest influence on the preferred timing of RT. The likelihood of preoperative RT being the preferred strategy was 82% on probabilistic sensitivity analysis. Conclusions: Preoperative RT is more cost-effective than postoperative RT in the management of resectable extremity STS, primarily because of the higher incidence of chronic adverse events with RT in the postoperative setting.

  1. The Genetic Basis for Variation in Sensitivity to Lead Toxicity in Drosophila melanogaster.

    PubMed

    Zhou, Shanshan; Morozova, Tatiana V; Hussain, Yasmeen N; Luoma, Sarah E; McCoy, Lenovia; Yamamoto, Akihiko; Mackay, Trudy F C; Anholt, Robert R H

    2016-07-01

    Lead toxicity presents a worldwide health problem, especially due to its adverse effects on cognitive development in children. However, identifying genes that give rise to individual variation in susceptibility to lead toxicity is challenging in human populations. Our goal was to use Drosophila melanogaster to identify evolutionarily conserved candidate genes associated with individual variation in susceptibility to lead exposure. To identify candidate genes associated with variation in susceptibility to lead toxicity, we measured effects of lead exposure on development time, viability and adult activity in the Drosophila melanogaster Genetic Reference Panel (DGRP) and performed genome-wide association analyses to identify candidate genes. We used mutants to assess functional causality of candidate genes and constructed a genetic network associated with variation in sensitivity to lead exposure, on which we could superimpose human orthologs. We found substantial heritabilities for all three traits and identified candidate genes associated with variation in susceptibility to lead exposure for each phenotype. The genetic architectures that determine variation in sensitivity to lead exposure are highly polygenic. Gene ontology and network analyses showed enrichment of genes associated with early development and function of the nervous system. Drosophila melanogaster presents an advantageous model to study the genetic underpinnings of variation in susceptibility to lead toxicity. Evolutionary conservation of cellular pathways that respond to toxic exposure allows predictions regarding orthologous genes and pathways across phyla. Thus, studies in the D. melanogaster model system can identify candidate susceptibility genes to guide subsequent studies in human populations. Zhou S, Morozova TV, Hussain YN, Luoma SE, McCoy L, Yamamoto A, Mackay TF, Anholt RR. 2016. The genetic basis for variation in sensitivity to lead toxicity in Drosophila melanogaster. Environ Health Perspect 124:1062-1070; http://dx.doi.org/10.1289/ehp.1510513.

  2. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
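
    The tolerance-annotation idea is easy to prototype: scan an input deck for "value +/- tolerance" fields, regardless of the surrounding file format, and emit Monte Carlo realizations of the file. The sketch below is an illustrative assumption, not the tool used with LAURA, HARA, or FIAT; the regular expression, the uniform sampling, and the example deck are all placeholders.

        # Tolerance-annotated input sketch: rewrite "nominal +/- tol" fields with random draws.
        import random
        import re

        TOL = re.compile(r"(-?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)")

        def sample_deck(text, rng):
            """Replace every 'nominal +/- tol' field with a draw from [nominal-tol, nominal+tol]."""
            def draw(match):
                nominal, tol = float(match.group(1)), float(match.group(2))
                return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
            return TOL.sub(draw, text)

        deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89 +/- 0.02\n"
        rng = random.Random(0)
        for i in range(3):
            print(f"--- realization {i} ---")
            print(sample_deck(deck, rng), end="")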

  3. Sensitivity to change of youth treatment outcome measures: a comparison of the CBCL, BASC-2, and Y-OQ.

    PubMed

    McClendon, Debra T; Warren, Jared S; Green, Katherine M; Burlingame, Gary M; Eggett, Dennis L; McClendon, Richard J

    2011-01-01

    This study evaluated the relative sensitivity to change of the Child Behavior Checklist/6-18 (CBCL), the Behavior Assessment System for Children-2 (BASC-2), and the Youth Outcome Questionnaire 2.01 (Y-OQ). Participants were 134 parents and 44 adolescents receiving routine outpatient services in a community mental health system. Hierarchical linear modeling analyses were used to examine change trajectories for the 3 measures across 3 groups: parent informants, parent and adolescent dyads, and adolescent informants. Results indicated that for parent-report measures, the Y-OQ was most change sensitive; the BASC-2 and CBCL were not statistically different from each other. Significant differences in change sensitivity were not observed for youth self-report of symptoms. Results suggest that the Y-OQ may be particularly useful for evaluating change in overall psychosocial functioning in children and adolescents. © 2010 Wiley Periodicals, Inc.

  4. Machine Learning Techniques for Prediction of Early Childhood Obesity.

    PubMed

    Dugan, T M; Mukhopadhyay, S; Carroll, A; Downs, S

    2015-01-01

    This paper aims to predict childhood obesity after age two, using only data collected prior to the second birthday by a clinical decision support system called CHICA. Analyses of six different machine learning methods: RandomTree, RandomForest, J48, ID3, Naïve Bayes, and Bayes trained on CHICA data show that an accurate, sensitive model can be created. Of the methods analyzed, the ID3 model trained on the CHICA dataset proved the best overall performance with accuracy of 85% and sensitivity of 89%. Additionally, the ID3 model had a positive predictive value of 84% and a negative predictive value of 88%. The structure of the tree also gives insight into the strongest predictors of future obesity in children. Many of the strongest predictors seen in the ID3 modeling of the CHICA dataset have been independently validated in the literature as correlated with obesity, thereby supporting the validity of the model. This study demonstrated that data from a production clinical decision support system can be used to build an accurate machine learning model to predict obesity in children after age two.
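
    The performance figures quoted above (accuracy, sensitivity, positive and negative predictive value) all derive from a confusion matrix. The sketch below fits a small entropy-split decision tree, in the spirit of ID3, on synthetic records and reports those four metrics; scikit-learn and the synthetic data are illustrative assumptions, since the study itself used Weka-style learners on clinical CHICA data.

        # Decision-tree screening sketch on synthetic data (not the CHICA dataset).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        n = 2000
        X = rng.normal(size=(n, 5))                       # stand-ins for pre-age-2 features
        risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] + X[:, 1])))
        y = rng.random(n) < risk                          # synthetic "obese after age two" label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
        model.fit(X_tr, y_tr)

        tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
        print(f"accuracy    {(tp + tn) / (tp + tn + fp + fn):.2f}")
        print(f"sensitivity {tp / (tp + fn):.2f}")
        print(f"PPV         {tp / (tp + fp):.2f}")
        print(f"NPV         {tn / (tn + fn):.2f}")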

  5. Viability of piping plover Charadrius melodus metapopulations

    USGS Publications Warehouse

    Plissner, Jonathan H.; Haig, Susan M.

    2000-01-01

    The metapopulation viability analysis package, VORTEX, was used to examine viability and recovery objectives for piping plovers Charadrius melodus, an endangered shorebird that breeds in three distinct regions of North America. Baseline models indicate that while Atlantic Coast populations, under current management practices, are at little risk of near-term extinction, Great Plains and Great Lakes populations require 36% higher mean fecundity for a significant probability of persisting for the next 100 years. Metapopulation structure (i.e. the delineation of populations within the metapopulation) and interpopulation dispersal rates had varying effects on model results; however, spatially-structured metapopulations exhibited lower viability than that reported for single-population models. The models were most sensitive to variation in survivorship; hence, additional mortality data will improve their accuracy. With this information, such models become useful tools in identifying successful management objectives; and sensitivity analyses, even in the absence of some data, may indicate which options are likely to be most effective. Metapopulation viability models are best suited for developing conservation strategies for achieving recovery objectives based on maintaining an externally derived, target population size and structure.

  6. Monthly hydroclimatology of the continental United States

    NASA Astrophysics Data System (ADS)

    Petersen, Thomas; Devineni, Naresh; Sankarasubramanian, A.

    2018-04-01

    Physical/semi-empirical models that do not require any calibration are critically needed for estimating hydrological fluxes at ungauged sites. We develop semi-empirical models for estimating the mean and variance of monthly streamflow based on a Taylor series approximation of a lumped, physically based water balance model. The proposed models require the mean and variance of monthly precipitation and potential evapotranspiration, the co-variability of precipitation and potential evapotranspiration, and regionally calibrated parameters for catchment retention sensitivity, atmospheric moisture uptake sensitivity, the groundwater-partitioning factor, and the maximum soil moisture holding capacity. Estimates of the mean and variance of monthly streamflow using the semi-empirical equations are compared with the observed estimates for 1373 catchments in the continental United States. Analyses show that the proposed models explain the spatial variability in monthly moments for basins at lower elevations. A regionalization of parameters for each water resources region shows good agreement between observed moments and model-estimated moments during January, February, March and April for the mean, and all months except May and June for the variance. Thus, the proposed relationships could be employed for understanding and estimating the monthly hydroclimatology of ungauged basins using regional parameters.
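
    The first-order (Taylor series) moment propagation underlying such semi-empirical equations can be sketched generically: expand Q = f(P, PET) around the input means, so the approximate mean is f evaluated at the means and the approximate variance is grad(f)' Cov grad(f). The code below demonstrates that idea with a generic placeholder runoff function and made-up input moments; it is not the paper's calibrated water-balance model or its regional parameters.

        # First-order moment propagation sketch for Q = f(P, PET).
        import numpy as np

        def runoff(p, pet, smax=150.0):
            """Toy monthly water balance, not the paper's model."""
            return np.maximum(p - pet * (1.0 - np.exp(-p / smax)), 0.0)

        def taylor_moments(f, mu, cov, h=1e-3):
            """First-order mean and variance of f(x), x ~ (mu, cov), via central differences."""
            mu = np.asarray(mu, dtype=float)
            grad = np.array([(f(*(mu + h * e)) - f(*(mu - h * e))) / (2 * h)
                             for e in np.eye(len(mu))])
            return f(*mu), float(grad @ cov @ grad)

        mu = [90.0, 60.0]                    # mean monthly P and PET, mm (hypothetical)
        cov = np.array([[400.0, -50.0],      # variances and P-PET covariance, mm^2 (hypothetical)
                        [-50.0, 100.0]])

        mean_q, var_q = taylor_moments(runoff, mu, cov)
        print(f"Taylor:      mean Q = {mean_q:.1f} mm, var Q = {var_q:.1f} mm^2")

        samples = np.random.default_rng(0).multivariate_normal(mu, cov, 100_000)
        q = runoff(samples[:, 0], samples[:, 1])
        print(f"Monte Carlo: mean Q = {q.mean():.1f} mm, var Q = {q.var():.1f} mm^2")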

  7. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
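
    The tornado-diagram step of this kind of model-order reduction is straightforward to sketch: swing each branch choice one at a time between its extreme settings while holding everything else at base values, and rank the branches by the resulting swing in the output; branches with negligible swing are candidates for fixing at a single value. The loss function and parameter settings below are hypothetical stand-ins, not actual UCERF3-TD logic-tree branches.

        # Tornado-diagram sketch: one-at-a-time swings ranked by output change.

        def portfolio_loss(params):
            """Toy annualized portfolio loss model (arbitrary units)."""
            return (params["rate_model"] * 2.0 + params["ground_motion"] * 3.5
                    + params["site_amp"] * 1.2 + params["deformation"] * 0.4)

        base = {"rate_model": 1.0, "ground_motion": 1.0, "site_amp": 1.0, "deformation": 1.0}
        ranges = {"rate_model": (0.8, 1.3), "ground_motion": (0.7, 1.4),
                  "site_amp": (0.9, 1.1), "deformation": (0.6, 1.5)}

        swings = []
        for name, (lo, hi) in ranges.items():
            low = portfolio_loss({**base, name: lo})
            high = portfolio_loss({**base, name: hi})
            swings.append((abs(high - low), name))

        for swing, name in sorted(swings, reverse=True):
            print(f"{name:<14} swing = {swing:.2f}")   # small-swing branches can be fixed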

  8. An adherence based cost-consequence model comparing bimatoprost 0.01% to bimatoprost 0.03%.

    PubMed

    Wong, William B; Patel, Vaishali D; Kowalski, Jonathan W; Schwartz, Gail

    2013-09-01

    Estimate the long-term direct medical costs and clinical consequences of improved adherence with bimatoprost 0.01% compared to bimatoprost 0.03% in the treatment of glaucoma. A cost-consequence model was constructed from the perspective of a US healthcare payer. The model structure included three adherence levels (high, moderate, low) and four mean deviation (MD) defined health states (mild, moderate, severe glaucoma, blindness) for each adherence level. Clinical efficacy in terms of IOP reduction was obtained from the randomized controlled trial comparing bimatoprost 0.01% with bimatoprost 0.03%. Medication adherence was based on observed 12 month rates from an analysis of a nationally representative pharmacy claims database. Patients with high, moderate and low adherence were assumed to receive 100%, 50% and 0% of the IOP reduction observed in the clinical trial, respectively. Each 1 mmHg reduction in IOP was assumed to result in a 10% reduction in the risk of glaucoma progression. Worse glaucoma severity health states were associated with higher medical resource costs. Outcome measures were total costs, proportion of patients who progress and who become blind, and years of blindness. Deterministic sensitivity analyses were performed on uncertain model parameters. The percentage of patients progressing, becoming blind, and the time spent blind slightly favored bimatoprost 0.01%. Improved adherence with bimatoprost 0.01% led to higher costs in the first 2 years; however, starting in year 3 bimatoprost 0.01% became less costly compared to bimatoprost 0.03% with a total reduction in costs reaching US$3433 over a lifetime time horizon. Deterministic sensitivity analyses demonstrated that results were robust, with the majority of analyses favoring bimatoprost 0.01%. Application of 1 year adherence and efficacy over the long term are limitations. Modeling the effect of greater medication adherence with bimatoprost 0.01% compared with bimatoprost 0.03% suggests that differences may result in improved economic and patient outcomes.

  9. Cost-effectiveness of natalizumab vs fingolimod for the treatment of relapsing-remitting multiple sclerosis: analyses in Sweden.

    PubMed

    O'Day, Ken; Meyer, Kellie; Stafkey-Mailey, Dana; Watson, Crystal

    2015-04-01

    To assess the cost-effectiveness of natalizumab vs fingolimod over 2 years in relapsing-remitting multiple sclerosis (RRMS) patients and patients with rapidly evolving severe disease in Sweden. A decision analytic model was developed to estimate the incremental cost per relapse avoided of natalizumab and fingolimod from the perspective of the Swedish healthcare system. Modeled 2-year costs in Swedish kronor of treating RRMS patients included drug acquisition costs, administration and monitoring costs, and costs of treating MS relapses. Effectiveness was measured in terms of MS relapses avoided using data from the AFFIRM and FREEDOMS trials for all patients with RRMS and from post-hoc sub-group analyses for patients with rapidly evolving severe disease. Probabilistic sensitivity analyses were conducted to assess uncertainty. The analysis showed that, in all patients with MS, treatment with fingolimod costs less (440,463 Kr vs 444,324 Kr), but treatment with natalizumab results in more relapses avoided (0.74 vs 0.59), resulting in an incremental cost-effectiveness ratio (ICER) of 25,448 Kr per relapse avoided. In patients with rapidly evolving severe disease, natalizumab dominated fingolimod. Results of the sensitivity analysis demonstrate the robustness of the model results. At a willingness-to-pay (WTP) threshold of 500,000 Kr per relapse avoided, natalizumab is cost-effective in >80% of simulations in both patient populations. Limitations include absence of data from direct head-to-head studies comparing natalizumab and fingolimod, use of relapse rate reduction rather than sustained disability progression as the primary model outcome, assumption of 100% adherence to MS treatment, and exclusion of adverse event costs in the model. Natalizumab remains a cost-effective treatment option for patients with MS in Sweden. In the RRMS patient population, the incremental cost per relapse avoided is well below a 500,000 Kr WTP threshold per relapse avoided. In the rapidly evolving severe disease patient population, natalizumab dominates fingolimod.

  10. A model of BIS/BAS sensitivity, emotion regulation difficulties, and depression, anxiety, and stress symptoms in relation to sleep quality.

    PubMed

    Markarian, Shaunt A; Pickett, Scott M; Deveson, Danielle F; Kanona, Brenda B

    2013-11-30

    Recent research has indicated that interactions between behavioral inhibition system (BIS)/behavioral activation system (BAS) sensitivity and emotion regulation (ER) difficulties increase risk for psychopathology. Considering that sleep quality (SQ) has been linked to emotion regulation difficulties (ERD) and psychopathology, further investigation of a possible mechanism is needed. The current study examined associations between BIS/BAS sensitivity, ERD, and SQ to depression, anxiety, and stress symptoms in an undergraduate sample (n=459). Positive relationships between BIS sensitivity and both ERD and stress symptoms, and negative relationships between BAS-reward sensitivity and both ERD and depression symptoms were observed. Furthermore, ERD were positively related to depression, anxiety, and stress symptoms. Subsequent analyses revealed differential relationships between ERD and depression, anxiety, and stress symptoms among good-quality and poor-quality sleepers. The findings are discussed within the context of personality dimensions and self-regulatory mechanisms, along with implications for the treatment of depression, anxiety and sleep difficulties. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Cost-Utility Analysis of Telemonitoring Interventions for Patients with Chronic Obstructive Pulmonary Disease (COPD) in Germany.

    PubMed

    Hofer, Florian; Achelrod, Dmitrij; Stargardt, Tom

    2016-12-01

    Chronic obstructive pulmonary disease (COPD) poses major challenges for health care systems. Previous studies suggest that telemonitoring could be effective in preventing hospitalisations and hence reduce costs. The aim was to evaluate whether telemonitoring interventions for COPD are cost-effective from the perspective of German statutory sickness funds. A cost-utility analysis was conducted using a combination of a Markov model and a decision tree. Telemonitoring as an add-on to standard treatment was compared with standard treatment alone. The model consisted of four transition stages to account for COPD severity, and a terminal stage for death. Within each cycle, the frequency of exacerbations, costs (at 2015 prices), and quality-adjusted life years (QALYs) were calculated for each stage. Values for input parameters were taken from the literature. Deterministic and probabilistic sensitivity analyses were conducted. In the base case, telemonitoring led to an increase in incremental costs (€866 per patient) but also in incremental QALYs (0.05 per patient). The incremental cost-effectiveness ratio (ICER) was thus €17,410 per QALY gained. A deterministic sensitivity analysis showed that hospitalisation rate and costs for telemonitoring equipment greatly affected results. The probabilistic ICER averaged €34,432 per QALY (95% confidence interval 12,161-56,703). We provide evidence that telemonitoring may be cost-effective in Germany from a payer's point of view. This holds even after deterministic and probabilistic sensitivity analyses.
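
    The core of such a cost-utility model is a Markov cohort simulation: a transition matrix moves the cohort between severity states each cycle, and per-cycle costs and utilities are accumulated with discounting. The sketch below shows that machinery with hypothetical states, transition probabilities, costs, and utilities; none of the numbers are the published German COPD model inputs.

        # Markov cohort sketch: quarterly cycles, discounted costs and QALYs (all inputs hypothetical).
        import numpy as np

        states = ["mild", "moderate", "severe", "very severe", "dead"]
        P = np.array([                     # quarterly transition probabilities (rows sum to 1)
            [0.92, 0.06, 0.01, 0.00, 0.01],
            [0.02, 0.90, 0.06, 0.01, 0.01],
            [0.00, 0.03, 0.88, 0.06, 0.03],
            [0.00, 0.00, 0.04, 0.90, 0.06],
            [0.00, 0.00, 0.00, 0.00, 1.00],
        ])
        cost_per_cycle = np.array([150.0, 350.0, 800.0, 1500.0, 0.0])   # EUR per quarter, hypothetical
        utility_per_cycle = np.array([0.21, 0.19, 0.16, 0.13, 0.0])     # QALYs per quarter, hypothetical

        def run_cohort(p, years=10, annual_discount=0.03):
            dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # whole cohort starts in "mild"
            cost = qaly = 0.0
            for cycle in range(4 * years):
                disc = (1 + annual_discount) ** (-(cycle / 4))
                cost += disc * (dist @ cost_per_cycle)
                qaly += disc * (dist @ utility_per_cycle)
                dist = dist @ p
            return cost, qaly

        cost, qaly = run_cohort(P)
        print(f"expected discounted cost EUR {cost:,.0f}, QALYs {qaly:.2f} per patient")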

  12. Cost-Effectiveness Analysis of Probiotic Use to Prevent Clostridium difficile Infection in Hospitalized Adults Receiving Antibiotics.

    PubMed

    Shen, Nicole T; Leff, Jared A; Schneider, Yecheskel; Crawford, Carl V; Maw, Anna; Bosworth, Brian; Simon, Matthew S

    2017-01-01

    Systematic reviews with meta-analyses and meta-regression suggest that timely probiotic use can prevent Clostridium difficile infection (CDI) in hospitalized adults receiving antibiotics, but the cost-effectiveness is unknown. We sought to evaluate the cost-effectiveness of probiotic use for prevention of CDI versus no probiotic use in the United States. We programmed a decision analytic model using published literature and national databases with a 1-year time horizon. The base case was modeled as a hypothetical cohort of hospitalized adults (mean age 68) receiving antibiotics with and without concurrent probiotic administration. Projected outcomes included quality-adjusted life-years (QALYs), costs (2013 US dollars), incremental cost-effectiveness ratios (ICERs; $/QALY), and cost per infection avoided. One-way, two-way, and probabilistic sensitivity analyses were conducted, and scenarios of different age cohorts were considered. ICERs less than $100,000 per QALY were considered cost-effective. Probiotic use dominated (more effective and less costly) no probiotic use. Results were sensitive to probiotic efficacy (relative risk <0.73), the baseline risk of CDI (>1.6%), the risk of probiotic-associated bacteremia/fungemia (<0.26%), probiotic cost (<$130), and age (>65). In probabilistic sensitivity analysis, at a willingness-to-pay threshold of $100,000/QALY, probiotics were the optimal strategy in 69.4% of simulations. Our findings suggest that probiotic use may be a cost-effective strategy to prevent CDI in hospitalized adults receiving antibiotics who are age 65 or older or when the baseline risk of CDI exceeds 1.6%.
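
    The probabilistic sensitivity analysis reported above can be sketched as repeated parameter draws followed by a net-monetary-benefit comparison at the willingness-to-pay threshold; the fraction of draws in which a strategy wins is its probability of being optimal. All parameter distributions below are hypothetical placeholders, not the published model inputs.

        # Probabilistic sensitivity analysis sketch (hypothetical distributions).
        import random

        random.seed(42)
        WTP = 100_000.0      # willingness to pay, USD per QALY
        N = 10_000

        def nmb(cdi_risk, extra_cost, qaly_loss, cdi_cost, wtp=WTP):
            """Net monetary benefit of a strategy (higher is better)."""
            return -wtp * cdi_risk * qaly_loss - cdi_risk * cdi_cost - extra_cost

        wins = 0
        for _ in range(N):
            baseline_risk = random.betavariate(4, 96)      # baseline CDI risk, ~4% mean
            relative_risk = random.betavariate(12, 8)      # probiotic effect, ~0.6 mean
            probiotic_cost = random.uniform(20, 130)       # USD per course
            cdi_cost = random.lognormvariate(9.6, 0.4)     # USD per CDI episode
            qaly_loss = random.uniform(0.05, 0.15)         # QALYs lost per CDI episode

            wins += (nmb(baseline_risk * relative_risk, probiotic_cost, qaly_loss, cdi_cost)
                     > nmb(baseline_risk, 0.0, qaly_loss, cdi_cost))

        print(f"probiotics optimal in {wins / N:.1%} of simulations at ${WTP:,.0f}/QALY")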

  13. Less is more: cost-effectiveness analysis of surveillance strategies for small, nonfunctional, radiographically benign adrenal incidentalomas.

    PubMed

    Chomsky-Higgins, Kathryn; Seib, Carolyn; Rochefort, Holly; Gosnell, Jessica; Shen, Wen T; Kahn, James G; Duh, Quan-Yang; Suh, Insoo

    2018-01-01

    Guidelines for management of small adrenal incidentalomas are mutually inconsistent. No cost-effectiveness analysis has been performed to evaluate rigorously the relative merits of these strategies. We constructed a decision-analytic model to evaluate surveillance strategies for <4 cm, nonfunctional, benign-appearing adrenal incidentalomas. We evaluated 4 surveillance strategies: none, one-time, annual for 2 years, and annual for 5 years. Threshold and sensitivity analyses assessed robustness of the model. Costs were represented in 2016 US dollars and health outcomes in quality-adjusted life-years. No surveillance has an expected net cost of $262 and 26.22 quality-adjusted life-years. One-time surveillance costs $158 more and adds 0.2 quality-adjusted life-years for an incremental cost-effectiveness ratio of $778 per quality-adjusted life-year. The strategies involving more surveillance were dominated by the no-surveillance and one-time surveillance strategies, being less effective and more expensive. Above a 0.7% prevalence of adrenocortical carcinoma, one-time surveillance was the most effective strategy. The results were robust to all sensitivity analyses of disease prevalence, sensitivity, and specificity of diagnostic assays and imaging, as well as health state utility. For patients with a <4 cm, nonfunctional, benign-appearing mass, one-time follow-up evaluation involving noncontrast computed tomography and biochemical evaluation is cost-effective. Strategies requiring more surveillance accrue more cost without incremental benefit. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Solution accuracies of finite element reentry heat transfer and thermal stress analyses of Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    1988-01-01

    Accuracies of solutions (structural temperatures and thermal stresses) obtained from different thermal and structural FEMs set up for the Space Shuttle Orbiter (SSO) are compared and discussed. For studying the effect of element size on the solution accuracies of heat-transfer and thermal-stress analyses of the SSO, five SPAR thermal models and five NASTRAN structural models were set up for wing midspan bay 3. The structural temperature distribution over the wing skin (lower and upper) surface of one bay was dome-shaped and induced more severe thermal stresses in the chordwise direction than in the spanwise direction. The induced thermal stresses were extremely sensitive to slight variations in structural temperature distributions. Both internal convection and internal radiation were found to have equal effects on the SSO.

  15. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  16. Cost-effectiveness of continuation maintenance pemetrexed after cisplatin and pemetrexed chemotherapy for advanced nonsquamous non-small-cell lung cancer: estimates from the perspective of the Chinese health care system.

    PubMed

    Zeng, Xiaohui; Peng, Liubao; Li, Jianhe; Chen, Gannong; Tan, Chongqing; Wang, Siying; Wan, Xiaomin; Ouyang, Lihui; Zhao, Ziying

    2013-01-01

    Continuation maintenance treatment with pemetrexed is approved by current clinical guidelines as a category 2A recommendation after induction therapy with cisplatin and pemetrexed chemotherapy (CP strategy) for patients with advanced nonsquamous non-small-cell lung cancer (NSCLC). However, the cost-effectiveness of the treatment remains unclear. We completed a trial-based assessment, from the perspective of the Chinese health care system, of the cost-effectiveness of maintenance pemetrexed treatment after a CP strategy for patients with advanced nonsquamous NSCLC. A Markov model was developed to estimate costs and benefits. It was based on a clinical trial that compared continuation maintenance pemetrexed therapy plus best supportive care (BSC) versus placebo plus BSC after a CP strategy for advanced nonsquamous NSCLC. Sensitivity analyses were conducted to assess the stability of the model. The model base case analysis suggested that continuation maintenance pemetrexed therapy after a CP strategy would increase benefits in a 1-, 2-, 5-, or 10-year time horizon, with incremental costs of $183,589.06, $126,353.16, $124,766.68, and $124,793.12 per quality-adjusted life-year gained, respectively. The most sensitive influential variable in the cost-effectiveness analysis was the utility of the progression-free survival state, followed by proportion of patients with postdiscontinuation therapy in both arms, proportion of BSC costs for PFS versus progressed survival state, and cost of pemetrexed. Probabilistic sensitivity analysis indicated that the cost-effective probability of adding continuation maintenance pemetrexed therapy to BSC was zero. One-way and probabilistic sensitivity analyses revealed that the Markov model was robust. Continuation maintenance of pemetrexed after a CP strategy for patients with advanced nonsquamous NSCLC is not cost-effective based on a recent clinical trial. Decreasing the price or adjusting the dosage of pemetrexed may be a better option for meeting the treatment demands of Chinese patients. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.

  17. Fracture-Based Mesh Size Requirements for Matrix Cracks in Continuum Damage Mechanics Models

    NASA Technical Reports Server (NTRS)

    Leone, Frank A.; Davila, Carlos G.; Mabson, Gerald E.; Ramnath, Madhavadas; Hyder, Imran

    2017-01-01

    This paper evaluates the ability of progressive damage analysis (PDA) finite element (FE) models to predict transverse matrix cracks in unidirectional composites. The results of the analyses are compared to closed-form linear elastic fracture mechanics (LEFM) solutions. Matrix cracks in fiber-reinforced composite materials subjected to mode I and mode II loading are studied using continuum damage mechanics and zero-thickness cohesive zone modeling approaches. The FE models used in this study are built parametrically so as to investigate several model input variables and the limits associated with matching the upper-bound LEFM solutions. Specifically, the sensitivity of the PDA FE model results to changes in strength and element size are investigated.

  18. Cost-Effectiveness Analysis of Stereotactic Body Radiation Therapy Compared With Radiofrequency Ablation for Inoperable Colorectal Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hayeon, E-mail: kimh2@upmc.edu; Gill, Beant; Beriwal, Sushil

    Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality of life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed to be the same for both strategies (25 months). SBRT cost $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation in median survival for both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective. Ideally, these results should be confirmed with prospective comparative data.
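
    The headline figure is simply the ratio of the stated increments; with the rounded numbers quoted above,

    \[ \mathrm{ICER} = \frac{\Delta C}{\Delta E} = \frac{\$8202}{0.05\ \text{QALY}} \approx \$164{,}000\ \text{per QALY gained}, \]

    which agrees with the reported $164,660 per QALY once the QALY gain is carried at full, unrounded precision.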

  19. Cost-effectiveness of a new urinary biomarker-based risk score compared to standard of care in prostate cancer diagnostics - a decision analytical model.

    PubMed

    Dijkstra, Siebren; Govers, Tim M; Hendriks, Rianne J; Schalken, Jack A; Van Criekinge, Wim; Van Neste, Leander; Grutters, Janneke P C; Sedelaar, John P Michiel; van Oort, Inge M

    2017-11-01

    To assess the cost-effectiveness of a new urinary biomarker-based risk score (SelectMDx; MDxHealth, Inc., Irvine, CA, USA) to identify patients for transrectal ultrasonography (TRUS)-guided biopsy and to compare this with the current standard of care (SOC), using only prostate-specific antigen (PSA) to select for TRUS-guided biopsy. A decision tree and Markov model were developed to evaluate the cost-effectiveness of SelectMDx as a reflex test vs SOC in men with a PSA level of >3 ng/mL. Transition probabilities, utilities and costs were derived from the literature and expert opinion. Cost-effectiveness was expressed in quality-adjusted life years (QALYs) and healthcare costs of both diagnostic strategies, simulating the course of patients over a time horizon representing 18 years. Deterministic sensitivity analyses were performed to address uncertainty in assumptions. A diagnostic strategy including SelectMDx with a cut-off chosen at a sensitivity of 95.7% for high-grade prostate cancer resulted in savings of €128 and a gain of 0.025 QALY per patient compared to the SOC strategy. The sensitivity analyses showed that the disutility assigned to active surveillance had a high impact on the QALYs gained, and that the disutility attributed to TRUS-guided biopsy only slightly influenced the outcome of the model. Based on the currently available evidence, the reduction of overdiagnosis and overtreatment due to the use of the SelectMDx test in men with PSA levels of >3 ng/mL may lead to a reduction in total costs per patient and a gain in QALYs. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
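
    As a rough illustration of how a reflex biomarker test enters such a decision tree, the sketch below compares "biopsy every man with PSA > 3 ng/mL" with "biopsy only if the reflex test is positive". The prevalence, the test's specificity, and the downstream cost/QALY payoffs are hypothetical placeholders rather than the model's calibrated values; only the 95.7% sensitivity is taken from the text above.

```python
def strategy_value(p_hg, sens, spec, payoffs):
    """Expected (cost, QALY) when biopsy is done only for test-positives.

    p_hg    -- prevalence of high-grade cancer in the tested population
    sens    -- test sensitivity for high-grade cancer
    spec    -- test specificity
    payoffs -- dict mapping (has_cancer, biopsied) -> (cost, qaly)
    """
    branches = {
        (True,  True):  p_hg * sens,              # detected -> treated
        (True,  False): p_hg * (1 - sens),        # missed -> late detection
        (False, True):  (1 - p_hg) * (1 - spec),  # unnecessary biopsy
        (False, False): (1 - p_hg) * spec,        # correctly spared biopsy
    }
    cost = sum(p * payoffs[b][0] for b, p in branches.items())
    qaly = sum(p * payoffs[b][1] for b, p in branches.items())
    return cost, qaly

payoffs = {  # hypothetical lifetime cost (EUR) and QALYs for each branch
    (True,  True):  (12000, 11.5),
    (True,  False): (18000, 10.2),
    (False, True):  ( 2500, 12.8),
    (False, False): (  300, 12.9),
}

# SOC: every man with PSA > 3 ng/mL is biopsied (acts like sens = 1, spec = 0)
soc = strategy_value(0.25, 1.00, 0.00, payoffs)
# Reflex test at the cut-off quoted above (sensitivity 95.7%, assumed spec 0.50)
mdx = strategy_value(0.25, 0.957, 0.50, payoffs)

print(f"savings per patient: EUR {soc[0] - mdx[0]:.0f}, "
      f"QALY gain: {mdx[1] - soc[1]:+.3f}")
```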

  20. Cost-effectiveness analysis of neurocognitive-sparing treatments for brain metastases.

    PubMed

    Savitz, Samuel T; Chen, Ronald C; Sher, David J

    2015-12-01

    Decisions regarding how to treat patients who have 1 to 3 brain metastases require important tradeoffs between controlling recurrences, side effects, and costs. In this analysis, the authors compared novel treatments versus usual care to determine the incremental cost-effectiveness ratio from a payer's (Medicare) perspective. Cost-effectiveness was evaluated using a microsimulation of a Markov model for 60 one-month cycles. The model used 4 simulated cohorts of patients aged 65 years with 1 to 3 brain metastases. The 4 cohorts had a median survival of 3, 6, 12, and 24 months to test the sensitivity of the model to different prognoses. The treatment alternatives evaluated included stereotactic radiosurgery (SRS) with 3 variants of salvage after recurrence (whole-brain radiotherapy [WBRT], hippocampal avoidance WBRT [HA-WBRT], SRS plus WBRT, and SRS plus HA-WBRT). The findings were tested for robustness using probabilistic and deterministic sensitivity analyses. Traditional radiation therapies remained cost-effective for patients in the 3-month and 6-month cohorts. In the cohorts with longer median survival, HA-WBRT and SRS plus HA-WBRT became cost-effective relative to traditional treatments. When the treatments that involved HA-WBRT were excluded, either SRS alone or SRS plus WBRT was cost-effective relative to WBRT alone. The deterministic and probabilistic sensitivity analyses confirmed the robustness of these results. HA-WBRT and SRS plus HA-WBRT were cost-effective for 2 of the 4 cohorts, demonstrating the value of controlling late brain toxicity with this novel therapy. Cost-effectiveness depended on patient life expectancy. SRS was cost-effective in the cohorts with short prognoses (3 and 6 months), whereas HA-WBRT and SRS plus HA-WBRT were cost-effective in the cohorts with longer prognoses (12 and 24 months). © 2015 American Cancer Society.

  1. Costs of trastuzumab in combination with chemotherapy for HER2-positive advanced gastric or gastroesophageal junction cancer: an economic evaluation in the Chinese context.

    PubMed

    Wu, Bin; Ye, Ming; Chen, Huafeng; Shen, Jinfang F

    2012-02-01

    Adding trastuzumab to a conventional regimen of chemotherapy can improve survival in patients with human epidermal growth factor receptor 2 (HER2)-positive advanced gastric or gastroesophageal junction (GEJ) cancer, but the economic impact of this practice is unknown. The purpose of this cost-effectiveness analysis was to estimate the effects of adding trastuzumab to standard chemotherapy in patients with HER2-positive advanced gastric or GEJ cancer on health and economic outcomes in China. A Markov model was developed to simulate the clinical course of typical patients with HER2-positive advanced gastric or GEJ cancer. Five-year quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs) were estimated. Model inputs were derived from the published literature and government sources. Direct costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were conducted. On baseline analysis, the addition of trastuzumab increased cost and QALY by $56,004.30 (year-2010 US $) and 0.18, respectively, relative to conventional chemotherapy, resulting in an ICER of $251,667.10/QALY gained. Probabilistic sensitivity analyses supported that the addition of trastuzumab was not cost-effective. Budgetary impact analysis estimated that the annual increase in fiscal expenditures would be ~$1 billion. On univariate sensitivity analysis, the median overall survival time for conventional chemotherapy was the most influential factor with respect to the robustness of the model. The findings from the present analysis suggest that the addition of trastuzumab to conventional chemotherapy might not be cost-effective in patients with HER2-positive advanced gastric or GEJ cancer. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.

  2. A general method for handling missing binary outcome data in randomized controlled trials.

    PubMed

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-12-01

    The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials. A total of 489 and 1758 participants from two smoking cessation trials. The abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
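
    A stripped-down version of this style of sensitivity analysis is to re-impute the missing binary outcomes under a range of assumptions about how non-responders differ from responders, here parameterized by an informative-missingness odds ratio (IMOR). This is one simple instance of the broader idea of embedding standard analyses in a wider class of models, not the authors' exact parameterization, and the arm-level counts below are invented for illustration.

```python
def risk_with_missing(n_success, n_fail, n_missing, imor):
    """Expected success probability when missing outcomes are assumed to have
    'imor' times the odds of success of observed participants."""
    p_obs = n_success / (n_success + n_fail)
    odds_miss = imor * p_obs / (1 - p_obs)
    p_miss = odds_miss / (1 + odds_miss)
    n_total = n_success + n_fail + n_missing
    return (n_success + n_missing * p_miss) / n_total

# hypothetical arm-level counts: (abstinent, not abstinent, missing)
intervention = (60, 120, 70)
control = (45, 140, 60)

# imor = 0 reproduces 'missing = smoking'; imor = 1 treats missing like observed
for imor in (0.0, 0.25, 0.5, 1.0):
    rd = (risk_with_missing(*intervention, imor)
          - risk_with_missing(*control, imor))
    print(f"IMOR = {imor:4.2f}: risk difference = {rd:+.3f}")
```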

  3. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  4. The rise of multiple imputation: a review of the reporting and implementation of the method in medical research.

    PubMed

    Hayati Rezvan, Panteha; Lee, Katherine J; Simpson, Julie A

    2015-04-07

    Missing data are common in medical research, which can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation (MI) is a statistical method, widely adopted in practice, for dealing with missing data. Many academic journals now emphasise the importance of reporting information regarding missing data and proposed guidelines for documenting the application of MI have been published. This review evaluated the reporting of missing data, the application of MI including the details provided regarding the imputation model, and the frequency of sensitivity analyses within the MI framework in medical research articles. A systematic review of articles published in the Lancet and New England Journal of Medicine between January 2008 and December 2013 in which MI was implemented was carried out. We identified 103 papers that used MI, with the number of papers increasing from 11 in 2008 to 26 in 2013. Nearly half of the papers specified the proportion of complete cases or the proportion with missing data by each variable. In the majority of the articles (86%) the imputed variables were specified. Of the 38 papers (37%) that stated the method of imputation, 20 used chained equations, 8 used multivariate normal imputation, and 10 used alternative methods. Very few articles (9%) detailed how they handled non-normally distributed variables during imputation. Thirty-nine papers (38%) stated the variables included in the imputation model. Less than half of the papers (46%) reported the number of imputations, and only two papers compared the distribution of imputed and observed data. Sixty-six papers presented the results from MI as a secondary analysis. Only three articles carried out a sensitivity analysis following MI to assess departures from the missing at random assumption, with details of the sensitivity analyses only provided by one article. This review outlined deficiencies in the documenting of missing data and the details provided about imputation. Furthermore, only a few articles performed sensitivity analyses following MI even though this is strongly recommended in guidelines. Authors are encouraged to follow the available guidelines and provide information on missing data and the imputation process.
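
    For orientation, the sketch below runs the full cycle that the reporting guidelines ask authors to document: generate several imputed datasets with a chained-equations-style imputer, fit the analysis model to each, and pool the estimates with Rubin's rules. The data, variables, and analysis model are synthetic placeholders.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
x_mis = x.copy()
x_mis[rng.random(n) < 0.3] = np.nan          # 30% of the covariate missing

M = 20                                        # number of imputations
betas, variances = [], []
for m in range(M):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    data = imputer.fit_transform(np.column_stack([x_mis, y]))
    X = sm.add_constant(data[:, 0])
    fit = sm.OLS(data[:, 1], X).fit()         # analysis model on imputed data
    betas.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

qbar = np.mean(betas)                         # pooled point estimate
ubar = np.mean(variances)                     # within-imputation variance
b = np.var(betas, ddof=1)                     # between-imputation variance
total_var = ubar + (1 + 1 / M) * b            # Rubin's rules
print(f"pooled beta = {qbar:.3f} (SE {np.sqrt(total_var):.3f})")
```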

  5. Evolution of Indian land surface biases in the seasonal hindcasts from the Met Office Global Seasonal Forecasting System GloSea5

    NASA Astrophysics Data System (ADS)

    Chevuturi, Amulya; Turner, Andrew G.; Woolnough, Steve J.; Martin, Gill

    2017-04-01

    In this study we investigate the development of biases over the Indian region in summer hindcasts of the UK Met Office coupled initialised global seasonal forecasting system, GloSea5-GC2. Previous work has demonstrated the rapid evolution of strong monsoon circulation biases over India from seasonal forecasts initialised in early May, together with coupled strong easterly wind biases on the equator. These mean state biases lead to strong precipitation errors during the monsoon over the subcontinent. We analyse a set of three springtime start dates for the 20-year hindcast period (1992-2011), with fifteen ensemble members in total for each year. We use comparisons with a variety of observations to assess the evolution of the mean state biases over the Indian land surface. All biases within the model develop rapidly, particularly the surface heat and radiation flux biases. Strong biases are present within the model climatology from the pre-monsoon (May) in the surface heat fluxes over India (higher sensible / lower latent heat fluxes) when compared to observed estimates. The early evolution of such biases, prior to the onset of the monsoon rains, suggests possible problems with the land surface scheme or soil moisture errors. Further analysis of soil moisture over the Indian land surface shows a dry bias present from the beginning of the hindcasts during the pre-monsoon. This persists until after the monsoon develops (July), after which there is a wet bias over the region. The soil moisture used to initialize the model also shows a dry bias when compared against observed estimates, which may introduce the same bias into the hindcasts. The early dry bias in the model may reduce local moisture availability through surface evaporation and thus may limit precipitation recycling. On this premise, we test the sensitivity of the modelled monsoon to higher soil moisture forcing. We run sensitivity experiments in the atmosphere-only version of the model, initialized with gridpoint-wise annual soil moisture maxima over the Indian land surface. We plan to analyse the response of these experiments in terms of surface heat fluxes and, subsequently, monsoon precipitation.

  6. VIIRS/J1 polarization narrative

    NASA Astrophysics Data System (ADS)

    Waluschka, Eugene; McCorkel, Joel; McIntire, Jeff; Moyer, David; McAndrew, Brendan; Brown, Steven W.; Lykke, Keith R.; Young, James B.; Fest, Eric; Butler, James; Wang, Tung R.; Monroy, Eslim O.; Turpie, Kevin; Meister, Gerhard; Thome, Kurtis J.

    2015-09-01

    The polarization sensitivity of the Visible/NearIR (VISNIR) bands in the Joint Polar Satellite System 1 (J1) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was measured using a broadband source. While the polarization sensitivity for bands M5-M7, I1, and I2 was less than 2.5%, the maximum polarization sensitivity for bands M1, M2, M3, and M4 was measured to be 6.4%, 4.4%, 3.1%, and 4.3%, respectively, with a polarization characterization uncertainty of less than 0.38%. A detailed polarization model indicated that the large polarization sensitivity observed in the M1 to M4 bands is mainly due to the large polarization sensitivity introduced at the leading and trailing edges of the newly manufactured VISNIR bandpass focal plane filters installed in front of the VISNIR detectors. This was confirmed by polarization measurements of bands M1 and M4 using monochromatic light. Discussed are the activities leading up to and including the two polarization tests, the polarization model and its results, the role of the focal plane filters, the polarization testing of the Aft-Optics-Assembly, the testing of the polarizers at the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center and at the National Institute of Standards and Technology (NIST) facility, and the use of NIST's Traveling Spectral Irradiance and Radiance responsivity Calibrations using Uniform Sources (T-SIRCUS) for polarization testing and the associated analyses and results.
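
    Band-averaged polarization sensitivity of this kind is conventionally extracted by rotating a linear polarizer in front of the instrument and fitting the second-harmonic modulation of the response. The sketch below shows that generic fit on synthetic data; it is not a description of the VIIRS test procedure or its data.

```python
import numpy as np

def polarization_sensitivity(theta_deg, signal):
    """Fit signal(theta) = a0 + a2*cos(2*theta) + b2*sin(2*theta) and return
    the fractional modulation amplitude and the phase of maximum response."""
    th = np.radians(theta_deg)
    A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    a0, a2, b2 = np.linalg.lstsq(A, signal, rcond=None)[0]
    amplitude = np.hypot(a2, b2) / a0
    phase_deg = 0.5 * np.degrees(np.arctan2(b2, a2))
    return amplitude, phase_deg

# synthetic example: 3% sensitivity with the maximum response at 20 degrees
theta = np.arange(0, 360, 15)
truth = 1000.0 * (1 + 0.03 * np.cos(2 * np.radians(theta - 20)))
meas = truth + np.random.default_rng(1).normal(scale=1.0, size=theta.size)

amp, phase = polarization_sensitivity(theta, meas)
print(f"fitted sensitivity = {100 * amp:.2f}% at phase {phase:.1f} deg")
```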

  7. Cost-effectiveness of renin-guided treatment of hypertension.

    PubMed

    Smith, Steven M; Campbell, Jonathan D

    2013-11-01

    A plasma renin activity (PRA)-guided strategy is more effective than standard care in treating hypertension (HTN). However, its clinical implementation has been slow, presumably due in part to economic concerns. We estimated the cost effectiveness of a PRA-guided treatment strategy compared with standard care in a treated but uncontrolled HTN population. We estimated costs, quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER) of PRA-guided therapy compared to standard care using a state-transition simulation model with alternate patient characteristic scenarios and sensitivity analyses. Patient-specific inputs for the base case scenario, males average age 63 years, reflected best available data from a recent clinical trial of PRA-guided therapy. Transition probabilities were estimated using Framingham risk equations or derived from the literature; costs and utilities were derived from the literature. In the base case scenario for males, the lifetime discounted costs and QALYs were $23,648 and 12.727 for PRA-guided therapy and $22,077 and 12.618 for standard care, respectively. The base case ICER was $14,497/QALY gained. In alternative scenario analyses varying patient input parameters, the results were sensitive to age, gender, baseline systolic blood pressure, and the addition of cardiovascular risk factors. Univariate sensitivity analyses demonstrated that results were most sensitive to varying the treatment effect of PRA-guided therapy and the cost of the PRA test. Our results suggest that PRA-guided therapy compared with standard care increases QALYs and medical costs in most scenarios. PRA-guided therapy appears to be most cost effective in younger persons and those with more cardiovascular risk factors. © American Journal of Hypertension, Ltd 2013. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Pharmacoeconomic analysis of antifungal therapy for primary treatment of invasive candidiasis caused by Candida albicans and non-albicans Candida species.

    PubMed

    Ou, Huang-Tz; Lee, Tsung-Ying; Chen, Yee-Chun; Charbonneau, Claudie

    2017-07-10

    Cost-effectiveness studies of echinocandins for the treatment of invasive candidiasis, including candidemia, are rare in Asia. No study has determined whether echinocandins are cost-effective for both Candida albicans and non-albicans Candida species. There have been no economic evaluations that compare non-echinocandins with the three available echinocandins. This study aimed to assess the cost-effectiveness of the individual echinocandins, namely caspofungin, micafungin, and anidulafungin, versus non-echinocandins for C. albicans and non-albicans Candida species, respectively. A decision tree model was constructed to assess the cost-effectiveness of echinocandins and non-echinocandins for invasive candidiasis. The probability of treatment success, mortality rate, and adverse drug events were extracted from published clinical trials. The cost variables (i.e., drug acquisition) were based on Taiwan's healthcare system from the perspective of a medical payer. One-way sensitivity analyses and probabilistic sensitivity analyses were conducted. For treating invasive candidiasis (all species), as compared to fluconazole, micafungin and caspofungin are dominated (less effective, more expensive), whereas anidulafungin is cost-effective (more effective, more expensive), costing US$3666.09 for each life-year gained, which was below the implicit threshold of the incremental cost-effectiveness ratio in Taiwan. For C. albicans, echinocandins are cost-saving as compared to non-echinocandins. For non-albicans Candida species, echinocandins are cost-effective as compared to non-echinocandins, costing US$652 for each life-year gained. The results were robust over a wide range of sensitivity analyses and were most sensitive to the clinical efficacy of antifungal treatment. Echinocandins, especially anidulafungin, appear to be cost-effective for invasive candidiasis caused by C. albicans and non-albicans Candida species in Taiwan.

  9. Subgroup Economic Evaluation of Radiotherapy for Breast Cancer After Mastectomy.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Ma, Jinan; Chen, Gannong; Li, Yuanjian

    2015-11-01

    A recent meta-analysis by the Early Breast Cancer Trialists' Collaborative Group found significant improvements achieved by postmastectomy radiotherapy (PMRT) for patients with breast cancer with 1 to 3 positive nodes (pN1-3). It is unclear whether PMRT is cost-effective for subgroups of patients with positive nodes. To determine the cost-effectiveness of PMRT for subgroups of patients with breast cancer with positive nodes. A semi-Markov model was constructed to estimate the expected lifetime costs, life expectancy, and quality-adjusted life-years for patients receiving or not receiving radiation therapy. Clinical and health utility data were from meta-analyses by the Early Breast Cancer Trialists' Collaborative Group or randomized clinical trials. Costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were performed. The incremental cost-effectiveness ratio was estimated as $7984, $4043, $3572, and $19,021 per quality-adjusted life-year for patients with positive nodes (pN+), patients with pN1-3, patients with pN1-3 who received systemic therapy, and patients with >4 positive nodes (pN4+), respectively. According to World Health Organization recommendations, these incremental cost-effectiveness ratios were judged to be cost-effective. However, the results of one-way sensitivity analyses suggested that the results were highly sensitive to the relative effectiveness of PMRT (rate ratio). We determined that the results were highly sensitive to the rate ratio. However, the addition of PMRT for patients with pN1-3 in China has a reasonable chance of being cost-effective and may be judged an efficient deployment of limited health resources, whereas the risk and uncertainty of PMRT are relatively greater for patients with pN4+. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.

  10. A Risk-Based Approach for Aerothermal/TPS Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak

    2007-01-01

    The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
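
    The Monte Carlo uncertainty/sensitivity loop described here can be illustrated with a toy stagnation-point heating surrogate: sample the uncertain inputs, propagate them through the response model, and rank the inputs by how strongly they drive the output (here via Spearman rank correlations). The surrogate relation, its constant, and the input distributions below are illustrative assumptions, not flight or program values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N = 10_000

# Uncertain model inputs (illustrative distributions)
rho = rng.lognormal(mean=np.log(3e-4), sigma=0.15, size=N)   # density, kg/m^3
v = rng.normal(7500.0, 150.0, size=N)                        # velocity, m/s
rn = rng.uniform(0.8, 1.2, size=N)                           # nose radius, m
c = rng.normal(1.83e-4, 0.15e-4, size=N)                     # heating constant

# Simple stagnation-point heating surrogate of Sutton-Graves form (stand-in
# physics model, not a validated aerothermal code)
q = c * np.sqrt(rho / rn) * v ** 3

print(f"q 95th percentile / median = {np.percentile(q, 95) / np.median(q):.2f}")
for name, x in [("density", rho), ("velocity", v),
                ("nose radius", rn), ("constant", c)]:
    r, _ = stats.spearmanr(x, q)
    print(f"rank correlation with heat flux, {name:11s}: {r:+.2f}")
```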

  11. Baby Budgeting: Oocyte Cryopreservation in Women Delaying Reproduction Can Reduce Cost per Live Birth

    PubMed Central

    Devine, Kate; Mumford, Sunni L.; Goldman, Kara N.; Hodes-Wertz, Brooke; Druckenmiller, Sarah; Propst, Anthony M.; Noyes, Nicole

    2015-01-01

    Objective To determine whether oocyte cryopreservation (OC) for deferred reproduction is cost-effective per live birth using a model constructed from observed clinical practice. Design Decision-tree mathematical model with sensitivity analyses. Setting Not applicable. Patients A simulated cohort of women wishing to delay childbearing until age 40 years. Interventions Not applicable. Main Outcome Measure Cost per live birth. Results Our primary model predicted that OC at age 35 years by women planning to defer pregnancy attempts until age 40 would decrease the cost per live birth to $39,946 (and increase the odds of live birth to 62% by the end of the model), indicating OC to be a cost-effective strategy relative to forgoing OC, which was associated with a predicted cost per live birth of $55,060 (and a 42% chance of live birth). If fresh autologous ART was added at age 40 prior to thawing oocytes, 74% obtained a live birth, though at an increased cost of $61,887. Separate sensitivity analyses demonstrated that OC remained cost-effective so long as patients underwent OC prior to age 38, more than 49% of those not obtaining a spontaneously conceived live birth returned to thaw oocytes, and the likelihood of obtaining a spontaneously conceived live birth after six months’ attempts at age 40 was less than 35%. Conclusions In women who plan to delay childbearing until age 40, oocyte cryopreservation before 38 years of age reduces the cost to obtain a live birth. PMID:25813281

  12. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    NASA Astrophysics Data System (ADS)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems, accounting for both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10°, in addition to a vertical pile, embedded in cohesionless soil and subjected to lateral harmonic pile head loadings over a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loading, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  13. Monte Carlo simulations for generic granite repository studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance was identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  14. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
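
    The stepwise-linear-regression sensitivity measure mentioned here amounts to regressing the simulated flux on standardized inputs and adding parameters one at a time by incremental R². A compact forward-selection version of that procedure is sketched below on synthetic Monte Carlo output; the parameter names are purely illustrative.

```python
import numpy as np

def _r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def forward_stepwise_r2(X, y, names):
    """Forward selection on standardized inputs, reporting incremental R^2."""
    Xs = (X - X.mean(0)) / X.std(0)
    selected, remaining, results = [], list(range(X.shape[1])), []
    prev_r2 = 0.0
    while remaining:
        best = max(remaining, key=lambda j: _r2(Xs[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
        r2 = _r2(Xs[:, selected], y)
        results.append((names[best], r2 - prev_r2))
        prev_r2 = r2
    return results

# synthetic Monte Carlo sample of a flux that depends mostly on two inputs
rng = np.random.default_rng(7)
names = ["partition coeff", "diffusivity", "follicle density", "duct radius"]
X = rng.normal(size=(5000, 4))
flux = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(size=5000)

for name, dr2 in forward_stepwise_r2(X, flux, names):
    print(f"{name:16s} incremental R^2 = {dr2:.3f}")
```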

  15. A hydrologic drying bias in water-resource impact analyses of anthropogenic climate change

    USGS Publications Warehouse

    Milly, Paul; Dunne, Krista A.

    2017-01-01

    For water-resource planning, sensitivity of freshwater availability to anthropogenic climate change (ACC) is often analyzed with “offline” hydrologic models that use precipitation and potential evapotranspiration (Ep) as inputs. Because Ep is not a climate-model output, an intermediary model of Ep must be introduced to connect the climate model to the hydrologic model. Several Ep methods are used. The suitability of each can be assessed by noting that a credible Ep method for offline analyses should be able to reproduce climate models’ ACC-driven changes in actual evapotranspiration in regions and seasons of negligible water stress (Ew). We quantified this ability for seven commonly used Ep methods and for a simple proportionality with available energy (“energy-only” method). With the exception of the energy-only method, all methods tend to overestimate substantially the increase in Ep associated with ACC. In an offline hydrologic model, the Ep-change biases produce excessive increases in actual evapotranspiration (E), whether the system experiences water stress or not, and thence strong negative biases in runoff change, as compared to hydrologic fluxes in the driving climate models. The runoff biases are comparable in magnitude to the ACC-induced runoff changes themselves. These results suggest that future hydrologic drying (wetting) trends are likely being systematically and substantially overestimated (underestimated) in many water-resource impact analyses.
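
    Written out, the "energy-only" benchmark treats potential evapotranspiration as proportional to the available energy, with the proportionality constant left to calibration:

    \[ \lambda E_p \;\propto\; (R_n - G), \]

    where \(\lambda\) is the latent heat of vaporization, \(R_n\) the net radiation, and \(G\) the ground heat flux.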

  16. Economic evaluation of ezetimibe treatment in combination with statin therapy in the United States.

    PubMed

    Davies, Glenn M; Vyas, Ami; Baxter, Carl A

    2017-07-01

    This study assessed the cost-effectiveness of ezetimibe with statin therapy vs statin monotherapy from a US payer perspective, assuming the impending patent expiration of ezetimibe. A Markov-like economic model consisting of 28 distinct health states was used. Model population data were obtained from US linked claims and electronic medical records, with inclusion criteria based on diagnostic guidelines. Inputs came from recent clinical trials, meta-analyses, and cost-effectiveness analyses. The base-case scenario was used to evaluate the cost-effectiveness of adding ezetimibe 10 mg to statin therapy in patients aged 35-74 years with a history of coronary heart disease (CHD) and/or stroke, and with low-density lipoprotein cholesterol (LDL-C) levels ≥70 mg/dL over a lifetime horizon, assuming a 90% price reduction of ezetimibe after 1 year to take into account the impending patent expiration in the second quarter of 2017. Sub-group analyses included patients with LDL-C levels ≥100 mg/dL and patients with diabetes with LDL-C levels ≥70 mg/dL. The lifetime discounted incremental cost-effectiveness ratio (ICER) for ezetimibe added to statin was $9,149 per quality-adjusted life year (QALY) for the base-case scenario. For patients with LDL-C levels ≥100 mg/dL, the ICER was $839/QALY; for those with diabetes and LDL-C levels ≥70 mg/dL, it was $560/QALY. One-way sensitivity analyses showed that the model was sensitive to changes in the cost of ezetimibe, the rate reduction of non-fatal CHD, and the utility weight for non-fatal CHD in the base-case and sub-group analyses. Indirect costs and the estimation of treatment discontinuation were not included. Compared with statin monotherapy, ezetimibe with statin therapy was cost-effective for secondary prevention of CHD and stroke and for primary prevention of these conditions in patients whose LDL-C levels are ≥100 mg/dL and in patients with diabetes, taking into account a 90% cost reduction for ezetimibe.

  17. Simplified models for dark matter face their consistent completions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent $SU(2)_{\mathrm{L}} \times U(1)_{\mathrm{Y}}$ gauge-invariant completions. We discuss the key physics the simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  18. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum-cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum-cost system are illustrated graphically. Topics discussed include the main line control program, the ground segment model, the space segment model, cost models, and launch vehicle selection. Several examples of minimum-cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  19. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    PubMed

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée JG; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.

  20. Cost-effectiveness of drug-eluting stents versus bare-metal stents in patients undergoing percutaneous coronary intervention.

    PubMed

    Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine

    2016-01-01

    To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring a percutaneous coronary intervention in France, using a recent meta-analysis including second-generation DES. A cost-effectiveness analysis was performed in the French National Health Insurance setting. Effectiveness estimates were taken from a meta-analysis of 117,762 patient-years across 76 randomised trials. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of willingness-to-pay thresholds per major cardiac event-free year gained. Deterministic and probabilistic sensitivity analyses were performed. Base case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to a reduction in second revascularisations, and an absence of myocardial infarction and stent thrombosis. These results were robust to uncertainty in one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the decrease in DES prices, the development of new-generation DES, and recent meta-analysis results, DES can now be considered cost-effective regardless of selective indication in France, according to European recommendations.
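
    The acceptability curve reported here is computed from the probabilistic sensitivity-analysis draws: at each willingness-to-pay value, it is the share of simulations in which the incremental net benefit of DES over BMS is positive. A generic sketch, with simulated draws standing in for the model's PSA output (the distributions below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 5000
# hypothetical PSA draws: incremental cost (EUR) and incremental effect
# (major cardiac event-free years) of DES vs BMS
d_cost = rng.normal(-184.0, 250.0, n_sims)
d_effect = rng.normal(0.05, 0.02, n_sims)

for wtp in (0, 2000, 7000, 20000):     # EUR per event-free year gained
    prob = np.mean(wtp * d_effect - d_cost > 0)   # P(incremental NMB > 0)
    print(f"WTP {wtp:>6}: P(DES cost-effective) = {prob:.2f}")
```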

  2. High sensitivity of tidewater outlet glacier dynamics to shape

    NASA Astrophysics Data System (ADS)

    Enderlin, E. M.; Howat, I. M.; Vieli, A.

    2013-02-01

    Variability in tidewater outlet glacier behavior under similar external forcing has been attributed to differences in outlet shape (i.e. bed elevation and width), but this dependence has not been investigated in detail. Here we use a numerical ice flow model to show that the dynamics of tidewater outlet glaciers under external forcing are highly sensitive to width and bed topography. Our sensitivity tests indicate that for glaciers with similar discharge, the trunks of wider glaciers and those grounded over deeper basal depressions tend to be closer to flotation, so that less dynamically induced thinning results in rapid, unstable retreat following a perturbation. The lag time between the onset of the perturbation and unstable retreat varies with outlet shape, which may help explain intra-regional variability in tidewater outlet glacier behavior. Further, because the perturbation response is dependent on the thickness relative to flotation, varying the bed topography within the range of observational uncertainty can result in either stable or unstable retreat due to the same perturbation. Thus, extreme care must be taken when interpreting the future behavior of actual glacier systems using numerical ice flow models that are not accompanied by comprehensive sensitivity analyses.

  4. Earth system sensitivity inferred from Pliocene modelling and data

    USGS Publications Warehouse

    Lunt, D.J.; Haywood, A.M.; Schmidt, G.A.; Salzmann, U.; Valdes, P.J.; Dowsett, H.J.

    2010-01-01

    Quantifying the equilibrium response of global temperatures to an increase in atmospheric carbon dioxide concentrations is one of the cornerstones of climate research. Components of the Earth's climate system that vary over long timescales, such as ice sheets and vegetation, could have an important effect on this temperature sensitivity, but have often been neglected. Here we use a coupled atmosphere-ocean general circulation model to simulate the climate of the mid-Pliocene warm period (about three million years ago), and analyse the forcings and feedbacks that contributed to the relatively warm temperatures. Furthermore, we compare our simulation with proxy records of mid-Pliocene sea surface temperature. Taking these lines of evidence together, we estimate that the response of the Earth system to elevated atmospheric carbon dioxide concentrations is 30-50% greater than the response based on those fast-adjusting components of the climate system that are used traditionally to estimate climate sensitivity. We conclude that targets for the long-term stabilization of atmospheric greenhouse-gas concentrations aimed at preventing a dangerous human interference with the climate system should take into account this higher sensitivity of the Earth system. © 2010 Macmillan Publishers Limited. All rights reserved.

  5. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.

  6. Single walled boron nitride nanotube-based biosensor: an atomistic finite element modelling approach.

    PubMed

    Panchal, Mitesh B; Upadhyay, Sanjay H

    2014-09-01

    The unprecedented dynamic characteristics of nanoelectromechanical systems make them suitable for nanoscale mass sensing applications. Owing to their superior biocompatibility, boron nitride nanotubes (BNNTs) are increasingly used for such applications. In this study, the feasibility of a single-walled BNNT (SWBNNT)-based biosensor is explored. A molecular structural mechanics-based finite element (FE) modelling approach is used to analyse the dynamic behaviour of SWBNNT-based biosensors, and the application of SWBNNT-based mass sensing at the zeptogram level is reported. The effects of nanotube length and of different chiral atomic structures of the SWBNNT on sensitivity are also analysed. The vibrational behaviour of the SWBNNT is examined for higher-order modes of vibration to identify the intermediate landing position of a biological object at the zeptogram scale. The molecular structural mechanics-based FE modelling approach proves effective for incorporating different chiralities of the atomic structure, and different boundary conditions can be simulated with the same approach to analyse the dynamic behaviour of the SWBNNT-based mass sensor. The study demonstrates the potential of the SWBNNT as a nanobiosensor capable of zeptogram-level mass sensing.
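
    For orientation, zeptogram-scale mass sensing with a resonator rests on the standard small-mass frequency-shift relation, quoted here as the generic textbook approximation rather than a result of the paper:

    \[ \frac{\Delta f}{f_{0}} \;\approx\; -\,\frac{\Delta m}{2\,m_{\mathrm{eff}}} \quad\Longrightarrow\quad \Delta m \;\approx\; -\,2\,m_{\mathrm{eff}}\,\frac{\Delta f}{f_{0}}, \]

    so the smallest detectable mass is set by the resonator's effective mass and the smallest resolvable frequency shift; the FE model resolves this mode by mode and as a function of where the object lands on the tube.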

  7. Modern Perspectives on Numerical Modeling of Cardiac Pacemaker Cell

    PubMed Central

    Maltsev, Victor A.; Yaniv, Yael; Maltsev, Anna V.; Stern, Michael D.; Lakatta, Edward G.

    2015-01-01

    Cardiac pacemaking is a complex phenomenon that is still not completely understood. Together with experimental studies, numerical modeling has been traditionally used to acquire mechanistic insights in this research area. This review summarizes the present state of numerical modeling of the cardiac pacemaker, including approaches to resolve present paradoxes and controversies. Specifically we discuss the requirement for realistic modeling to consider symmetrical importance of both intracellular and cell membrane processes (within a recent “coupled-clock” theory). Promising future developments of the complex pacemaker system models include the introduction of local calcium control, mitochondria function, and biochemical regulation of protein phosphorylation and cAMP production. Modern numerical and theoretical methods such as multi-parameter sensitivity analyses within extended populations of models and bifurcation analyses are also important for the definition of the most realistic parameters that describe a robust, yet simultaneously flexible operation of the coupled-clock pacemaker cell system. The systems approach to exploring cardiac pacemaker function will guide development of new therapies, such as biological pacemakers for treating insufficient cardiac pacemaker function that becomes especially prevalent with advancing age. PMID:24748434

  8. Effects of winglet on transonic flutter characteristics of a cantilevered twin-engine-transport wing model

    NASA Technical Reports Server (NTRS)

    Ruhlin, C. L.; Bhatia, K. G.; Nagaraja, K. S.

    1986-01-01

    A transonic model and a low-speed model were flutter tested in the Langley Transonic Dynamics Tunnel at Mach numbers up to 0.90. Transonic flutter boundaries were measured for 10 different model configurations, which included variations in wing fuel, nacelle pylon stiffness, and wingtip configuration. The winglet effects were evaluated by testing the transonic model, having a specific wing fuel and nacelle pylon stiffness, with each of three wingtips: a nominal tip, a winglet, and a nominal tip ballasted to simulate the winglet mass. The addition of the winglet substantially reduced the flutter speed of the wing at transonic Mach numbers. The winglet effect was configuration-dependent and was primarily due to winglet aerodynamics rather than mass. Flutter analyses using modified strip-theory aerodynamics (experimentally weighted) correlated reasonably well with test results. The four transonic flutter mechanisms predicted by analysis were obtained experimentally. The analysis satisfactorily predicted the mass-density-ratio effects on subsonic flutter obtained using the low-speed model. Additional analyses were made to determine the flutter sensitivity to several parameters at transonic speeds.

  9. [Factor structure of the German version of the BIS/BAS Scales in a population-based sample].

    PubMed

    Müller, A; Smits, D; Claes, L; de Zwaan, M

    2013-02-01

    The Behavioural Inhibition System/Behavioural Activation System Scales (BIS/BAS-Scales) developed by Carver and White [1] are a self-rating instrument to assess dispositional sensitivity to punishment and reward. The present work aims to examine the factor structure of the German version of the BIS/BAS-Scales. In a large German population-based sample (n = 1881) the model fit of several factor models was tested using confirmatory factor analyses. The best model fit was found for the 5-factor model with two BIS (anxiety, fear) and three BAS (drive, reward responsiveness, fun seeking) scales, whereas the BIS-fear, BAS-reward responsiveness, and BAS-fun seeking subscales showed low internal consistency. The BIS/BAS scales were negatively correlated with age, and women reported higher BIS subscale scores than men. Confirmatory factor analyses suggest a 5-factor model. However, due to the low internal reliability of some of the subscales, the use of this model is questionable. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Cost-effectiveness of treating resistant hypertension with an implantable carotid body stimulator

    PubMed Central

    Young, KC; Teeters, JC; Benesch, CG; Bisognano, JD; Illig, KA

    2013-01-01

    Introduction The purposes of this study are to investigate the cost-effectiveness of an implantable carotid body stimulator (Rheos®) for treating resistant hypertension and determine the range of starting systolic blood pressure (SBP) values where the device remains cost-effective. Methods A Markov model compared a 20 mmHg drop in SBP from an initial level of 180 with Rheos® to failed medical management in a hypothetical 50-year old cohort. Direct costs (2007$), utilities and event rates for future myocardial infarction, stroke, heart failure and end-stage renal disease were modeled. Sensitivity analyses tested the assumptions in the model. Results The incremental cost-effectiveness ratio (ICER) for Rheos® was $64,400 per quality-adjusted life-year (QALY) using Framingham-derived event probabilities. The ICER was <$100,000/QALY for SBPs ≥142. A probability of device removal of <1% per year or SBP reductions of ≥24 mmHg were variables that decreased the ICER below $50,000/QALY. For cohort characteristics similar to ASCOT-BPLA trial participants, the ICER became $26,700/QALY. Two-way sensitivity analyses demonstrated that lowering SBP 12 mmHg from 220 or 21 mmHg from 140 were required. Conclusions Rheos® may be cost-effective, with an ICER between $50,000-$100,000/QALY. Cohort characteristics and efficacy are key to the cost-effectiveness of new therapies for resistant hypertension. PMID:19817936

  11. Cost-effectiveness analysis of the use of high-flow oxygen through nasal cannula in intensive care units in NHS England.

    PubMed

    Eaton Turner, Emily; Jenks, Michelle

    2018-06-01

    To estimate the cost-effectiveness of Nasal High Flow (NHF) in the intensive care unit (ICU) compared with standard oxygen or non-invasive ventilation (NIV) from a UK NHS perspective. Three cost-effectiveness models were developed to reflect scenarios of NHF use: first-line therapy (pre-intubation model); post-extubation in low-risk and high-risk patients. All models used randomized controlled trial data on the incidence of intubation/re-intubation, events leading to intubation/re-intubation, mortality and complications. NHS reference costs were primarily used. Sensitivity analyses were conducted. When used as first-line therapy, Optiflow™ NHF gives an estimated cost-saving of £469 per patient compared with standard oxygen and £611 versus NIV. NHF cost-savings for the high-severity sub-group were £727 versus standard oxygen, and £1,011 versus NIV. For low-risk post-extubation patients, NHF generates an estimated cost-saving of £156 versus standard oxygen. NHF decreases the number of re-intubations required in these scenarios. Results were robust in most sensitivity analyses. For high-risk post-extubation patients, NHF cost-savings were £104 versus NIV. NHF results in a non-significant increase in re-intubations required. However, reduction in respiratory failure offsets this. For patients in ICU who are at risk of intubation or re-intubation, NHF cannula is likely to be cost-saving.

  12. The modeled cost-effectiveness of family-based and adolescent-focused treatment for anorexia nervosa.

    PubMed

    Le, Long Khanh-Dao; Barendregt, Jan J; Hay, Phillipa; Sawyer, Susan M; Hughes, Elizabeth K; Mihalopoulos, Cathrine

    2017-12-01

    Anorexia nervosa (AN) is a prevalent, serious mental disorder. We aimed to evaluate the cost-effectiveness of family-based treatment (FBT) compared to adolescent-focused individual therapy (AFT) or no intervention within the Australian healthcare system. A Markov model was developed to estimate the cost and disability-adjusted life-years (DALYs) averted of FBT relative to comparators over 6 years from the health system perspective. The target population was 11-18 year olds with AN of relatively short duration. Uncertainty and sensitivity analyses were conducted to test model assumptions. Results are reported as incremental cost-effectiveness ratios (ICERs) in 2013 Australian dollars per DALY averted. FBT was less costly than AFT. Relative to no intervention, the mean ICERs of FBT and AFT were $5,089 (95% uncertainty interval (UI): dominant to $16,659) and $51,897 ($21,591 to $1,712,491) per DALY averted, respectively. FBT and AFT are 100% and 45% likely to be cost-effective, respectively, at a threshold of AUD$50,000 per DALY averted. Sensitivity analyses indicated that excluding hospital costs led to increases in the ICERs but the conclusion of the study did not change. FBT was the most cost-effective of the treatment arms, whereas AFT was not cost-effective compared to no intervention. Further research is required to verify this result. © 2017 Wiley Periodicals, Inc.
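
    The 95% uncertainty intervals and the probabilities of being cost-effective at an AUD$50,000 threshold quoted above are the standard outputs of a probabilistic sensitivity analysis. The sketch below shows how such summaries are typically computed by resampling uncertain inputs; the distributions used here are illustrative placeholders rather than the study's inputs.

```python
# Sketch of a probabilistic sensitivity analysis: resample uncertain inputs,
# recompute incremental cost and effect, then summarize the ICER distribution
# and the probability of being cost-effective at a willingness-to-pay threshold.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
wtp = 50_000                                   # currency units per DALY averted

# Hypothetical uncertain inputs (gamma for extra cost, normal for effect).
inc_cost    = rng.gamma(shape=4.0, scale=500.0, size=n)
dalys_avert = rng.normal(loc=0.40, scale=0.12, size=n).clip(min=1e-3)

icer = inc_cost / dalys_avert
inb  = wtp * dalys_avert - inc_cost            # incremental net monetary benefit

lo, hi = np.percentile(icer, [2.5, 97.5])
print(f"mean ICER {icer.mean():,.0f}  (95% UI {lo:,.0f} to {hi:,.0f})")
print(f"P(cost-effective at {wtp:,}/DALY) = {(inb > 0).mean():.0%}")
```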

  13. Cost-Effectiveness of Diagnostic Strategies for Suspected Scaphoid Fractures.

    PubMed

    Yin, Zhong-Gang; Zhang, Jian-Bing; Gong, Ke-Tong

    2015-08-01

    The aim of this study was to assess the cost effectiveness of multiple competing diagnostic strategies for suspected scaphoid fractures. With published data, the authors created a decision-tree model simulating the diagnosis of suspected scaphoid fractures. Clinical outcomes, costs, and cost effectiveness of immediate computed tomography (CT), day 3 magnetic resonance imaging (MRI), day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, week 2 radiographs-MRI, week 2 radiographs-bone scan, and immediate MRI were evaluated. The primary clinical outcome was the detection of scaphoid fractures. The authors adopted a societal perspective, including both the costs of healthcare and the cost of lost productivity. The incremental cost-effectiveness ratio (ICER), which expresses the incremental cost per incremental scaphoid fracture detected using a strategy, was calculated to compare these diagnostic strategies. Base case analysis, 1-way sensitivity analyses, and "worst case scenario" and "best case scenario" sensitivity analyses were performed. In the base case, the average cost per scaphoid fracture detected with immediate CT was $2553. The ICER of immediate MRI and day 3 MRI compared with immediate CT was $7483 and $32,000 per scaphoid fracture detected, respectively. The ICER of week 2 radiographs-MRI was around $170,000. Day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, and week 2 radiographs-bone scan strategies were dominated or extendedly dominated by MRI strategies. The results were generally robust in multiple sensitivity analyses. Immediate CT and MRI were the most cost-effective strategies for diagnosing suspected scaphoid fractures. Economic and Decision Analyses Level II. See Instructions for Authors for a complete description of levels of evidence.
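
    The ICER in this study is expressed per additional fracture detected rather than per QALY. A toy decision-tree comparison of two strategies, with invented prevalence, test sensitivities and costs, illustrates that bookkeeping:

```python
# Toy decision-tree comparison of two diagnostic strategies for a suspected
# fracture. Prevalence, test sensitivities and costs are invented placeholders;
# the point is only the arithmetic behind "cost per extra fracture detected".
prevalence = 0.20                      # true fractures among suspected cases

strategies = {
    # name: (test sensitivity, cost of work-up per patient)
    "immediate CT":  (0.85, 120.0),
    "immediate MRI": (0.97, 400.0),
}

def evaluate(sens, cost):
    detected = prevalence * sens       # fractures found per suspected patient
    return cost, detected

c_ct, d_ct   = evaluate(*strategies["immediate CT"])
c_mri, d_mri = evaluate(*strategies["immediate MRI"])

icer = (c_mri - c_ct) / (d_mri - d_ct)
print(f"MRI vs CT: {icer:,.0f} per additional fracture detected")
```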

  14. First- and Second-Line Bevacizumab in Addition to Chemotherapy for Metastatic Colorectal Cancer: A United States–Based Cost-Effectiveness Analysis

    PubMed Central

    Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.

    2015-01-01

    Purpose The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
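
    The univariable (one-way) sensitivity analyses mentioned above vary a single input across a plausible range while holding the others at their base-case values, and record the resulting ICER span (the data behind a tornado diagram). A minimal sketch with placeholder inputs, not the study's model:

```python
# One-way (univariable) sensitivity analysis sketch: vary each input over a
# range while the others stay at base case, and record the ICER span.
base = {"drug_cost": 60_000.0, "qaly_gain": 0.10, "other_cost": 5_000.0}

def icer(p):
    # deliberately simplified stand-in model: total extra cost per QALY gained
    return (p["drug_cost"] + p["other_cost"]) / p["qaly_gain"]

ranges = {
    "drug_cost":  (40_000.0, 80_000.0),
    "qaly_gain":  (0.05, 0.20),
    "other_cost": (2_000.0, 10_000.0),
}

for name, (lo, hi) in ranges.items():
    vals = []
    for v in (lo, hi):
        p = dict(base)
        p[name] = v                     # perturb one input at a time
        vals.append(icer(p))
    print(f"{name:<11} ICER {min(vals):>12,.0f} to {max(vals):>12,.0f}")
```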

  15. Long-term fate of depleted uranium at Aberdeen and Yuma Proving Grounds: Human health and ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebinger, M.H.; Beckman, R.J.; Myers, O.B.

    1996-09-01

    The purpose of this study was to evaluate the immediate and long-term consequences of depleted uranium (DU) in the environment at Aberdeen Proving Ground (APG) and Yuma Proving Ground (YPG) for the Test and Evaluation Command (TECOM) of the US Army. Specifically, we examined the potential for adverse radiological and toxicological effects to humans and ecosystems caused by exposure to DU at both installations. We developed contaminant transport models of aquatic and terrestrial ecosystems at APG and terrestrial ecosystems at YPG to assess potential adverse effects from DU exposure. Sensitivity and uncertainty analyses of the initial models showed the portions of the models that most influenced predicted DU concentrations, and the results of the sensitivity analyses were fundamental tools in designing field sampling campaigns at both installations. Results of uranium (U) isotope analyses of field samples provided data to evaluate the source of U in the environment and the toxicological and radiological doses to different ecosystem components and to humans. Probabilistic doses were estimated from the field data, and DU was identified in several components of the food chain at APG and YPG. Dose estimates from APG data indicated that U or DU uptake was insufficient to cause adverse toxicological or radiological effects. Dose estimates from YPG data indicated that U or DU uptake is insufficient to cause radiological effects in ecosystem components or in humans, but toxicological effects in small mammals (e.g., kangaroo rats and pocket mice) may occur from U or DU ingestion. The results of this study were used to modify environmental radiation monitoring plans at APG and YPG to ensure collection of adequate data for ongoing ecological and human health risk assessments.

  16. Cost-Effectiveness of Treating Hepatitis C with Sofosbuvir/Ledipasvir in Germany.

    PubMed

    Stahmeyer, Jona T; Rossol, Siegbert; Liersch, Sebastian; Guerra, Ines; Krauth, Christian

    2017-01-01

    Infections with the hepatitis C virus (HCV) are a global public health problem. Long-term consequences are the development of liver cirrhosis and hepatocellular carcinoma. Newly introduced direct acting antivirals, especially interferon-free regimens, have improved rates of sustained viral response above 90% in most patient groups and allow treating patients who were ineligible for treatment in the past. These new regimens have replaced former treatment and are recommended by current guidelines. However, there is an ongoing discussion on high pharmaceutical prices. Our aim was to assess the long-term cost-effectiveness of treating hepatitis C genotype 1 patients with sofosbuvir/ledipasvir (SOF/LDV) in Germany. We used a Markov cohort model to simulate disease progression and assess cost-effectiveness. The model calculates lifetime costs and outcomes (quality-adjusted life years, QALYs) of SOF/LDV and other strategies. Patients were stratified by treatment status (treatment-naive and treatment-experienced) and absence/presence of cirrhosis. Different treatment strategies were compared to prior standard of care. Sensitivity analyses were performed to evaluate model robustness. Base-case analysis results show that in treatment-naive non-cirrhotic patients treatment with SOF/LDV dominates the prior standard of care (is more effective and less costly). In treatment-naive cirrhotic patients an incremental cost-effectiveness ratio (ICER) of 3,383 €/QALY was estimated. In treatment-experienced patients, ICERs were 26,426 €/QALY for non-cirrhotic and 1,397 €/QALY for cirrhotic patients. Robustness of results was confirmed in sensitivity analyses. Our analysis shows that treatment with SOF/LDV is cost-effective compared to prior standard of care in all patient groups considering international costs per QALY thresholds.

  17. Greater first year effectiveness drives favorable cost-effectiveness of brand risedronate versus generic or brand alendronate: modeled Canadian analysis

    PubMed Central

    Papaioannou, A.; Thompson, M. F.; Pasquale, M. K.; Adachi, J. D.

    2016-01-01

    Summary The RisedronatE and ALendronate (REAL) study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data from real-world clinical practice. Using a published osteoporosis model, the researchers found risedronate to be cost-effective compared to generic or brand alendronate for the treatment of Canadian postmenopausal osteoporosis patients aged 65 years or older. Introduction The REAL study provides robust data on the real-world performance of risedronate and alendronate. The study used these data to assess the cost-effectiveness of brand risedronate versus generic or brand alendronate for treatment of Canadian postmenopausal osteoporosis patients aged 65 years or older. Methods A previously published osteoporosis model was populated with Canadian cost and epidemiological data, and the estimated fracture risk was validated. Effectiveness data were derived from REAL and utility data from published sources. The incremental cost per quality-adjusted life-year (QALY) gained was estimated from a Canadian public payer perspective, and comprehensive sensitivity analyses were conducted. Results The base case analysis found fewer fractures and more QALYs in the risedronate cohort, providing an incremental cost per QALY gained of $3,877 for risedronate compared to generic alendronate. The results were most sensitive to treatment duration and effectiveness. Conclusions The REAL study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data taken from real-world clinical practice. The analysis supports the cost-effectiveness of risedronate compared to generic or brand alendronate and the use of risedronate for the treatment of osteoporotic Canadian women aged 65 years or older with a BMD T-score ≤−2.5. PMID:18008100

  18. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  19. Animal population dynamics: Identification of critical components

    USGS Publications Warehouse

    Emlen, J.M.; Pikitch, E.K.

    1989-01-01

    There is a growing interest in the use of population dynamics models in environmental risk assessment and the promulgation of environmental regulatory policies. Unfortunately, because of species and areal differences in the physical and biotic influences on population dynamics, such models must almost inevitably be both complex and species- or site-specific. Given the enormous variety of species and sites of potential concern, this fact presents a problem; it simply is not possible to construct models for all species and circumstances. Therefore, it is useful, before building predictive population models, to discover what input parameters are of critical importance to the desired output. This information should enable the construction of simpler and more generalizable models. As a first step, it is useful to consider population models as composed of two partly separable classes, one comprising the purely mechanical descriptors of dynamics from given demographic parameter values, and the other describing the modulation of the demographic parameters by environmental factors (changes in physical environment, species interactions, pathogens, xenobiotic chemicals). This division permits sensitivity analyses to be run on the first of these classes, providing guidance for subsequent model simplification. We here apply such a sensitivity analysis to network models of mammalian and avian population dynamics.
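
    A standard worked example of the "purely mechanical" class of analysis described here is the sensitivity and elasticity of the asymptotic growth rate of an age-structured (Leslie) matrix model to its vital rates, computed from the dominant eigenvalue and its eigenvectors (Caswell's formula). The matrix below is hypothetical and only illustrates the computation:

```python
# Demographic sensitivity analysis: how the asymptotic growth rate (dominant
# eigenvalue) of a Leslie matrix responds to small changes in each vital rate.
# The 3-age-class matrix is hypothetical.
import numpy as np

A = np.array([[0.0, 1.2, 2.0],     # fecundities of age classes 1-3
              [0.6, 0.0, 0.0],     # survival age 1 -> 2
              [0.0, 0.8, 0.0]])    # survival age 2 -> 3

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]
w = np.abs(vecs[:, i].real)            # stable age distribution (right eigvec)

vals_t, vecs_t = np.linalg.eig(A.T)
j = np.argmax(vals_t.real)
v = np.abs(vecs_t[:, j].real)          # reproductive values (left eigvec)

sens = np.outer(v, w) / (v @ w)        # d(lambda)/d(a_ij), Caswell's formula
elas = (A / lam) * sens                # elasticities: proportional sensitivities

print(f"lambda = {lam:.3f}")
print("elasticities (same layout as A):")
print(np.round(elas, 3))
```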

  20. NEMA, a functional-structural model of nitrogen economy within wheat culms after flowering. II. Evaluation and sensitivity analysis.

    PubMed

    Bertheloot, Jessica; Wu, Qiongli; Cournède, Paul-Henry; Andrieu, Bruno

    2011-10-01

    Simulating nitrogen economy in crop plants requires formalizing the interactions between soil nitrogen availability, root nitrogen acquisition, distribution between vegetative organs and remobilization towards grains. This study evaluates and analyses the functional-structural and mechanistic model of nitrogen economy, NEMA (Nitrogen Economy Model within plant Architecture), developed for winter wheat (Triticum aestivum) after flowering. NEMA was calibrated for field plants under three nitrogen fertilization treatments at flowering. Model behaviour was investigated and sensitivity to parameter values was analysed. Nitrogen content of all photosynthetic organs and in particular nitrogen vertical distribution along the stem and remobilization patterns in response to fertilization were simulated accurately by the model, from Rubisco turnover modulated by light intercepted by the organ and a mobile nitrogen pool. This pool proved to be a reliable indicator of plant nitrogen status, allowing efficient regulation of nitrogen acquisition by roots, remobilization from vegetative organs and accumulation in grains in response to nitrogen treatments. In our simulations, root capacity to import carbon, rather than carbon availability, limited nitrogen acquisition and ultimately nitrogen accumulation in grains, while Rubisco turnover intensity mostly affected dry matter accumulation in grains. NEMA enabled interpretation of several key patterns usually observed in field conditions and the identification of plausible processes limiting for grain yield, protein content and root nitrogen acquisition that could be targets for plant breeding; however, further understanding requires more mechanistic formalization of carbon metabolism. Its strong physiological basis and its realistic behaviour support its use to gain insights into nitrogen economy after flowering.

  1. Cost-effectiveness of rivaroxaban for stroke prevention in atrial fibrillation in the Portuguese setting.

    PubMed

    Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia

    2014-09-01

    To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.

  2. Eosinophilic esophagitis: dilate or medicate? A cost analysis model of the choice of initial therapy.

    PubMed

    Kavitt, R T; Penson, D F; Vaezi, M F

    2014-07-01

    Eosinophilic esophagitis (EoE) is an increasingly recognized clinical entity. The optimal initial treatment strategy in adults with EoE remains controversial. The aim of this study was to employ a decision analysis model to determine the less costly option between the two most commonly employed treatment strategies in EoE. We constructed a model for an index case of a patient with biopsy-proven EoE who continues to be symptomatic despite proton-pump inhibitor therapy. The following treatment strategies were included: (i) swallowed fluticasone inhaler (followed by esophagogastroduodenoscopy [EGD] with dilation if ineffective); and (ii) EGD with dilation (followed by swallowed fluticasone inhaler if ineffective). The time horizon was 1 year. The model focused on cost analysis of initial treatment strategies. The perspective of the healthcare payer was used. Sensitivity analyses were performed to assess the robustness of the model. For every patient whose symptoms improved or resolved with the strategy of fluticasone first followed by EGD, if necessary, it cost an average of $1078. Similarly, it cost an average of $1171 per patient if EGD with dilation was employed first. Sensitivity analyses indicated that initial treatment with fluticasone was the less costly strategy to improve dysphagia symptoms as long as the effectiveness of fluticasone remains at or above 0.62. Swallowed fluticasone inhaler (followed by EGD with dilation if necessary) is the more economical initial strategy when compared with EGD with dilation first. © 2012 Copyright the Authors. Journal compilation © 2012, Wiley Periodicals, Inc. and the International Society for Diseases of the Esophagus.
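
    The 0.62 figure above is a threshold (break-even) analysis: the effectiveness of first-line fluticasone at which the expected costs of the two sequential strategies are equal. The sketch below reproduces the mechanics with invented costs and rescue effectiveness, so its threshold will not match the published value:

```python
# Threshold analysis sketch: find the first-line effectiveness at which two
# sequential treatment strategies cost the same. Uses SciPy's root finder.
from scipy.optimize import brentq

COST_FLUT = 600.0     # hypothetical cost of a course of swallowed fluticasone
COST_EGD  = 900.0     # hypothetical cost of EGD with dilation
P_EGD     = 0.80      # hypothetical effectiveness of EGD used as rescue

def cost_flut_first(p_flut):
    # everyone pays for fluticasone; non-responders go on to EGD
    return COST_FLUT + (1.0 - p_flut) * COST_EGD

def cost_egd_first():
    # everyone pays for EGD; non-responders go on to fluticasone
    return COST_EGD + (1.0 - P_EGD) * COST_FLUT

threshold = brentq(lambda p: cost_flut_first(p) - cost_egd_first(), 0.0, 1.0)
print(f"fluticasone-first is cheaper once its effectiveness exceeds {threshold:.2f}")
```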

  3. An evidence-based framework for predicting the impact of differing autotroph-heterotroph thermal sensitivities on consumer–prey dynamics

    PubMed Central

    Yang, Zhou; Zhang, Lu; Zhu, Xuexia; Wang, Jun; Montagnes, David J S

    2016-01-01

    Increased temperature accelerates vital rates, influencing microbial population and wider ecosystem dynamics, for example, the predicted increases in cyanobacterial blooms associated with global warming. However, heterotrophic and mixotrophic protists, which are dominant grazers of microalgae, may be more thermally sensitive than autotrophs, and thus prey could be suppressed as temperature rises. Theoretical and meta-analyses have begun to address this issue, but an appropriate framework linking experimental data with theory is lacking. Using ecophysiological data to develop a novel model structure, we provide the first validation of this thermal sensitivity hypothesis: increased temperature improves the consumer's ability to control the autotrophic prey. Specifically, the model accounts for temperature effects on auto- and mixotrophs and ingestion, growth and mortality rates, using an ecologically and economically important system (cyanobacteria grazed by a mixotrophic flagellate). Once established, we show the model to be a good predictor of temperature impacts on consumer–prey dynamics by comparing simulations with microcosm observations. Then, through simulations, we indicate our conclusions remain valid, even with large changes in bottom-up factors (prey growth and carrying capacity). In conclusion, we show that rising temperature could, counterintuitively, reduce the propensity for microalgal blooms to occur and, critically, provide a novel model framework for needed, continued assessment. PMID:26684731

  4. Data to Decisions: Terminate, Tolerate, Transfer, or Treat

    DTIC Science & Technology

    2016-07-25

    and patching, a risk-based cyber-security decision model that enables a predictive capability to respond to impending cyber-attacks is needed...States. This sensitive data includes business proprietary information on key programs of record and infrastructure, including government documents at...leverage nationally. The Institute for Defense Analyses (IDA) assisted the DoD CIO in formalizing a proof of concept for cyber initiatives and

  5. The propagation of wind errors through ocean wave hindcasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holthuijsen, L.H.; Booij, N.; Bertotti, L.

    1996-08-01

    To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts if only one wind analysis is available.
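
    The essence of such an error-propagation exercise can be illustrated with a deliberately simplified stand-in for the wave model: a textbook fully-developed-sea relation in which significant wave height scales with the square of wind speed, so a given relative wind error roughly doubles in the wave-height estimate. The constant and the assumed 10% wind error below are illustrative only:

```python
# Toy error propagation: perturb the input wind speed and see how the error
# maps into significant wave height. The quadratic, fully developed relation
# below is a textbook idealization standing in for a real wave model.
g = 9.81

def hs_fully_developed(u10):
    # Pierson-Moskowitz-type scaling, Hs ~ 0.21 * U^2 / g (approximate constant)
    return 0.21 * u10**2 / g

u10 = 15.0                       # analysed wind speed, m/s
wind_err = 0.10                  # assumed 10% wind-speed uncertainty

hs0   = hs_fully_developed(u10)
hs_lo = hs_fully_developed(u10 * (1 - wind_err))
hs_hi = hs_fully_developed(u10 * (1 + wind_err))

print(f"Hs = {hs0:.2f} m, range {hs_lo:.2f}-{hs_hi:.2f} m "
      f"(~{(hs_hi - hs0) / hs0:.0%} for a {wind_err:.0%} wind error)")
```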

  6. Fertility treatment in women with polycystic ovary syndrome: a decision analysis of different oral ovulation induction agents

    PubMed Central

    Jungheim, Emily S.; Odibo, Anthony O.

    2010-01-01

    Study objective To compare different oral ovulation induction agents in treating infertile women with polycystic ovary syndrome. Design Decision-analytic model comparing three treatment strategies using probability estimates derived from literature review and sensitivity analyses performed on the baseline assumptions. Setting Outpatient reproductive medicine and gynecology practices. Patients Infertile women with polycystic ovary syndrome. Interventions Metformin, clomiphene citrate, or metformin with clomiphene citrate. Main Outcome Measures Live birth. Results Within the baseline assumptions, combination therapy with metformin and clomiphene citrate was the preferred therapy for achieving live birth in women with polycystic ovary syndrome. Sensitivity analysis revealed the model to be robust over a wide range of probabilities. Conclusions Combination therapy with metformin and clomiphene citrate should be considered as first-line treatment for infertile women with polycystic ovary syndrome. PMID:20451181

  7. Clinical Benefits, Costs, and Cost-Effectiveness of Neonatal Intensive Care in Mexico

    PubMed Central

    Profit, Jochen; Lee, Diana; Zupancic, John A.; Papile, LuAnn; Gutierrez, Cristina; Goldie, Sue J.; Gonzalez-Pier, Eduardo; Salomon, Joshua A.

    2010-01-01

    Background Neonatal intensive care improves survival, but is associated with high costs and disability amongst survivors. Recent health reform in Mexico launched a new subsidized insurance program, necessitating informed choices on the different interventions that might be covered by the program, including neonatal intensive care. The purpose of this study was to estimate the clinical outcomes, costs, and cost-effectiveness of neonatal intensive care in Mexico. Methods and Findings A cost-effectiveness analysis was conducted using a decision analytic model of health and economic outcomes following preterm birth. Model parameters governing health outcomes were estimated from Mexican vital registration and hospital discharge databases, supplemented with meta-analyses and systematic reviews from the published literature. Costs were estimated on the basis of data provided by the Ministry of Health in Mexico and World Health Organization price lists, supplemented with published studies from other countries as needed. The model estimated changes in clinical outcomes, life expectancy, disability-free life expectancy, lifetime costs, disability-adjusted life years (DALYs), and incremental cost-effectiveness ratios (ICERs) for neonatal intensive care compared to no intensive care. Uncertainty around the results was characterized using one-way sensitivity analyses and a multivariate probabilistic sensitivity analysis. In the base-case analysis, neonatal intensive care for infants born at 24–26, 27–29, and 30–33 weeks gestational age prolonged life expectancy by 28, 43, and 34 years and averted 9, 15, and 12 DALYs, at incremental costs per infant of US$11,400, US$9,500, and US$3,000, respectively, compared to an alternative of no intensive care. The ICERs of neonatal intensive care at 24–26, 27–29, and 30–33 weeks were US$1,200, US$650, and US$240, per DALY averted, respectively. The findings were robust to variation in parameter values over wide ranges in sensitivity analyses. Conclusions Incremental cost-effectiveness ratios for neonatal intensive care imply very high value for money on the basis of conventional benchmarks for cost-effectiveness analysis. Please see later in the article for the Editors' Summary PMID:21179496

  8. Cost-effectiveness Analysis of Sacubitril/Valsartan vs Enalapril in Patients With Heart Failure and Reduced Ejection Fraction.

    PubMed

    Gaziano, Thomas A; Fonarow, Gregg C; Claggett, Brian; Chan, Wing W; Deschaseaux-Voinet, Celine; Turner, Stuart J; Rouleau, Jean L; Zile, Michael R; McMurray, John J V; Solomon, Scott D

    2016-09-01

    The angiotensin receptor neprilysin inhibitor sacubitril/valsartan was associated with a reduction in cardiovascular mortality, all-cause mortality, and hospitalizations compared with enalapril. Sacubitril/valsartan has been approved for use in heart failure (HF) with reduced ejection fraction in the United States and cost has been suggested as 1 factor that will influence the use of this agent. To estimate the cost-effectiveness of sacubitril/valsartan vs enalapril in the United States. Data from US adults (mean [SD] age, 63.8 [11.5] years) with HF with reduced ejection fraction and characteristics similar to those in the PARADIGM-HF trial were used as inputs for a 2-state Markov model that simulated HF. Risks of all-cause mortality and hospitalization from HF or other reasons were estimated with a 30-year time horizon. Quality of life was based on trial EQ-5D scores. Hospital costs combined Medicare and private insurance reimbursement rates; medication costs included the wholesale acquisition cost for sacubitril/valsartan and enalapril. A discount rate of 3% was used. Sensitivity analyses were performed on key inputs, including hospital costs, mortality benefit, hazard ratio for hospitalization reduction, drug costs, and quality-of-life estimates. Hospitalizations, quality-adjusted life-years (QALYs), costs, and incremental costs per QALY gained. The 2-state Markov model of US adult patients (mean age, 63.8 years) calculated that there would be 220 fewer hospital admissions per 1000 patients with HF treated with sacubitril/valsartan vs enalapril over 30 years. The incremental costs and QALYs gained with sacubitril/valsartan treatment were estimated at $35 512 and 0.78, respectively, compared with enalapril, equating to an incremental cost-effectiveness ratio (ICER) of $45 017 per QALY for the base-case. Sensitivity analyses demonstrated ICERs ranging from $35 357 to $75 301 per QALY. For eligible patients with HF with reduced ejection fraction, the Markov model calculated that sacubitril/valsartan would increase life expectancy at an ICER consistent with other high-value accepted cardiovascular interventions. Sensitivity analyses demonstrated sacubitril/valsartan would remain cost-effective vs enalapril.

  9. Physiologically Based Pharmacokinetic Modeling Suggests Limited Drug–Drug Interaction Between Clopidogrel and Dasabuvir

    PubMed Central

    Fu, W; Badri, P; Bow, DAJ; Fischer, V

    2017-01-01

    Dasabuvir, a nonnucleoside NS5B polymerase inhibitor, is a sensitive substrate of cytochrome P450 (CYP) 2C8 with a potential for drug–drug interaction (DDI) with clopidogrel. A physiologically based pharmacokinetic (PBPK) model was developed for dasabuvir to evaluate the DDI potential with clopidogrel, the acyl‐β‐D glucuronide metabolite of which has been reported as a strong mechanism‐based inhibitor of CYP2C8 based on an interaction with repaglinide. In addition, the PBPK model for clopidogrel and its metabolite were updated with additional in vitro data. Sensitivity analyses using these PBPK models suggested that CYP2C8 inhibition by clopidogrel acyl‐β‐D glucuronide may not be as potent as previously suggested. The dasabuvir and updated clopidogrel PBPK models predict a moderate increase of 1.5–1.9‐fold for Cmax and 1.9–2.8‐fold for AUC of dasabuvir when coadministered with clopidogrel. While the PBPK results suggest there is a potential for DDI between dasabuvir and clopidogrel, the magnitude is not expected to be clinically relevant. PMID:28411400
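
    A quick, hedged sanity check on PBPK-derived DDI magnitudes is the static AUC-ratio expression for a victim drug when part of its clearance is inhibited. The fraction metabolized and the residual enzyme activity used below are illustrative guesses, not parameters from the dasabuvir or clopidogrel models:

```python
# Static (non-PBPK) sanity check for a CYP-mediated DDI: the classic AUC-ratio
# expression for a victim drug when one elimination pathway is partly inhibited.
def auc_ratio(fm_enzyme, activity_remaining):
    """AUCR = 1 / (fm * remaining_activity + (1 - fm))

    fm_enzyme          -- fraction of victim clearance via the inhibited enzyme
    activity_remaining -- fraction of that enzyme's activity left (0..1)
    """
    return 1.0 / (fm_enzyme * activity_remaining + (1.0 - fm_enzyme))

# Illustrative sweep over plausible (assumed) values.
for fm in (0.5, 0.7, 0.9):
    for remaining in (0.5, 0.2):
        print(f"fm={fm:.1f}, activity left={remaining:.0%}: "
              f"AUC ratio = {auc_ratio(fm, remaining):.1f}")
```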

  10. The Anxiety Sensitivity Index--Revised: Confirmatory Factor Analyses, Structural Invariance in Caucasian and African American Samples, and Score Reliability and Validity

    ERIC Educational Resources Information Center

    Arnau, Randolph C.; Broman-Fulks, Joshua J.; Green, Bradley A.; Berman, Mitchell E.

    2009-01-01

    The most commonly used measure of anxiety sensitivity is the 36-item Anxiety Sensitivity Index--Revised (ASI-R). Exploratory factor analyses have produced several different factors structures for the ASI-R, but an acceptable fit using confirmatory factor analytic approaches has only been found for a 21-item version of the instrument. We evaluated…

  11. Evaluation of NOx Emissions and Modeling

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Simon, H. A.; Timin, B.; Dolwick, P. D.; Owen, R. C.; Eyth, A.; Foley, K.; Toro, C.; Baker, K. R.

    2017-12-01

    Studies focusing on ambient measurements of NOy have concluded that NOx emissions are overestimated and some have attributed the error to the onroad mobile sector. We investigate this conclusion to identify the cause of observed bias. First, we compare DISCOVER-AQ Baltimore ambient measurements to fine-scale modeling with NOy tagged by sector. Sector-based relationships with bias are present, but these are sensitive to simulated vertical mixing. This is evident both in the sensitivity to mixing parameterization and in the seasonal patterns of bias. We also evaluate observation-based indicators, like CO:NOy ratios, that are commonly used to diagnose emissions inventories. Second, we examine the sensitivity of predicted NOx and NOy to temporal allocation of emissions. We investigate alternative temporal allocations for EGUs without CEMS, on-road mobile, and several non-road categories. These results show some location-specific sensitivity and will lead to some improved temporal allocations. Third, near-road studies have inherently fewer confounding variables, and have been examined for more direct evaluation of emissions and dispersion models. From 2008-2011, the EPA and FHWA conducted near-road studies in Las Vegas and Detroit. These measurements are used to more directly evaluate the emissions and dispersion using site-specific traffic data. In addition, the site-specific emissions are being compared to the emissions used in larger-scale photochemical modeling to identify key discrepancies. These efforts are part of a larger coordinated effort by EPA scientists to ensure the highest quality in emissions and model processes. We look forward to sharing the state of these analyses and expected updates.

  12. pyres: a Python wrapper for electrical resistivity modeling with R2

    NASA Astrophysics Data System (ADS)

    Befus, Kevin M.

    2018-04-01

    A Python package, pyres, was written to handle common as well as specialized input and output tasks for the R2 electrical resistivity (ER) modeling program. Input steps, including handling field data, creating quadrilateral or triangular meshes, and data filtering, allow repeatable and flexible ER modeling within a programming environment. pyres includes non-trivial routines and functions for locating and constraining specific known or separately-parameterized regions in both quadrilateral and triangular meshes. Three basic examples of how to run forward and inverse models with pyres are provided. The importance of testing mesh convergence and model sensitivity is also addressed with higher-level examples that show how pyres can facilitate future research-grade ER analyses.

  13. Strategic enterprise resource planning in a health-care system using a multicriteria decision-making model.

    PubMed

    Lee, Chang Won; Kwak, N K

    2011-04-01

    This paper deals with strategic enterprise resource planning (ERP) in a health-care system using a multicriteria decision-making (MCDM) model. The model is developed and analyzed on the basis of the data obtained from a leading patient-oriented provider of health-care services in Korea. Goal criteria and priorities are identified and established via the analytic hierarchy process (AHP). Goal programming (GP) is utilized to derive satisfying solutions for designing, evaluating, and implementing an ERP. The model results are evaluated and sensitivity analyses are conducted in an effort to enhance the model applicability. The case study provides management with valuable insights for planning and controlling health-care activities and services.
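
    In AHP, the goal priorities are usually derived as the principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The sketch below uses an invented 3x3 comparison matrix, not data from the health-care case study:

```python
# Minimal AHP step: derive priority weights from a pairwise comparison matrix
# via the principal eigenvector and check consistency.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # normalized priority weights

n = A.shape[0]
lam_max = vals.real[k]
ci = (lam_max - n) / (n - 1)          # consistency index
ri = 0.58                             # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```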

  14. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
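
    The complex-variable (complex-step) approach referred to above evaluates the function at a complex-perturbed input and reads the derivative from the imaginary part, avoiding the subtractive cancellation that limits finite differences. The demonstration below uses a generic smooth test function, not the DYMORE equations:

```python
# Complex-step derivative approximation: f'(x) ~ Im(f(x + ih)) / h, accurate to
# near machine precision because no subtraction of nearly equal values occurs.
import cmath

def f(x):
    # generic smooth test function (works for real or complex arguments)
    return cmath.exp(x) / cmath.sqrt(cmath.sin(x) ** 3 + cmath.cos(x) ** 3)

x0 = 1.5
h = 1e-20
complex_step = f(x0 + 1j * h).imag / h

# central finite difference for comparison (limited by cancellation error)
hf = 1e-6
finite_diff = (f(x0 + hf).real - f(x0 - hf).real) / (2 * hf)

print(f"complex-step derivative : {complex_step:.12f}")
print(f"finite-difference value : {finite_diff:.12f}")
```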

  15. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

    This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of limited field data; diverse applications can be modeled; models allow cost and performance comparisons; simulations generate insights into cost structures. The models which are discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER - the hybrid optimization model for electric renewables - which provides economic screening for sensitivity analyses; and VIPOR - the village power model - which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  16. The influence of mothers' and fathers' sensitivity in the first year of life on children's cognitive outcomes at 18 and 36 months.

    PubMed

    Malmberg, L-E; Lewis, S; West, A; Murray, E; Sylva, K; Stein, A

    2016-01-01

    There has been increasing interest in the relative effects of mothers' and fathers' interactions with their infants on later development. However, to date there has been little work on children's cognitive outcomes. We examined the relative influence of fathers' and mothers' sensitivity during interactions with their children at the end of the child's first year (10-12 months, n = 97) on child general cognitive development at 18 months and language at 36 months. Both parents' sensitivity was associated with cognitive and language outcomes in univariate analyses. Mothers' sensitivity, however, appeared to be associated with family socio-demographic factors to a greater extent than fathers' sensitivity. Using path modelling, the effect of paternal sensitivity on general cognitive development at 18 months and language at 36 months was significantly greater than the effect of maternal sensitivity, when controlling for socio-demographic background. In relation to language at 36 months, there was some evidence that sensitivity of one parent buffered the effect of lower sensitivity of the other parent. These findings suggest that parental sensitivity can play an important role in children's cognitive and language development, and that higher sensitivity of one parent can compensate for the lower sensitivity of the other parent. Replication of these findings, however, is required in larger samples. © 2015 John Wiley & Sons Ltd.

  17. Ocean acidification over the next three centuries using a simple global climate carbon-cycle model: projections and sensitivities

    DOE PAGES

    Hartin, Corinne A.; Bond-Lamberty, Benjamin; Patel, Pralit; ...

    2016-08-01

    Continued oceanic uptake of anthropogenic CO2 is projected to significantly alter the chemistry of the upper oceans over the next three centuries, with potentially serious consequences for marine ecosystems. Relatively few models have the capability to make projections of ocean acidification, limiting our ability to assess the impacts and probabilities of ocean changes. In this study we examine the ability of Hector v1.1, a reduced-form global model, to project changes in the upper ocean carbonate system over the next three centuries, and quantify the model's sensitivity to parametric inputs. Hector is run under prescribed emission pathways from the Representative Concentration Pathways (RCPs) and compared to both observations and a suite of Coupled Model Intercomparison (CMIP5) model outputs. Current observations confirm that ocean acidification is already taking place, and CMIP5 models project significant changes occurring to 2300. Hector is consistent with the observational record within both the high- (> 55°) and low-latitude oceans (< 55°). The model projects low-latitude surface ocean pH to decrease from preindustrial levels of 8.17 to 7.77 in 2100, and to 7.50 in 2300; aragonite saturation levels (ΩAr) decrease from 4.1 units to 2.2 in 2100 and 1.4 in 2300 under RCP 8.5. These magnitudes and trends of ocean acidification within Hector are largely consistent with the CMIP5 model outputs, although we identify some small biases within Hector's carbonate system. Of the parameters tested, changes in [H+] are most sensitive to parameters that directly affect atmospheric CO2 concentrations – Q10 (terrestrial respiration temperature response) as well as changes in ocean circulation, while changes in ΩAr saturation levels are sensitive to changes in ocean salinity and Q10. We conclude that Hector is a robust tool well suited for rapid ocean acidification projections and sensitivity analyses, and it is capable of emulating both current observations and large-scale climate models under multiple emission pathways.
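
    Because pH is a base-10 logarithmic scale, the projected decline from 8.17 to 7.77 (and to 7.50 by 2300) implies a several-fold increase in hydrogen ion concentration; a two-line check:

```python
# Quick check of what the projected pH drop means for hydrogen ion
# concentration, since pH is a base-10 log scale: [H+] = 10**(-pH).
pH_pre, pH_2100, pH_2300 = 8.17, 7.77, 7.50
for label, ph in (("2100", pH_2100), ("2300", pH_2300)):
    increase = 10 ** (pH_pre - ph)   # factor increase in [H+] vs preindustrial
    print(f"{label}: [H+] x{increase:.1f} relative to preindustrial")
```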

  18. Ocean acidification over the next three centuries using a simple global climate carbon-cycle model: projections and sensitivities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartin, Corinne A.; Bond-Lamberty, Benjamin; Patel, Pralit

    Continued oceanic uptake of anthropogenic CO2 is projected to significantly alter the chemistry of the upper oceans over the next three centuries, with potentially serious consequences for marine ecosystems. Relatively few models have the capability to make projections of ocean acidification, limiting our ability to assess the impacts and probabilities of ocean changes. In this study we examine the ability of Hector v1.1, a reduced-form global model, to project changes in the upper ocean carbonate system over the next three centuries, and quantify the model's sensitivity to parametric inputs. Hector is run under prescribed emission pathways from the Representative Concentration Pathways (RCPs) and compared to both observations and a suite of Coupled Model Intercomparison (CMIP5) model outputs. Current observations confirm that ocean acidification is already taking place, and CMIP5 models project significant changes occurring to 2300. Hector is consistent with the observational record within both the high- (> 55°) and low-latitude oceans (< 55°). The model projects low-latitude surface ocean pH to decrease from preindustrial levels of 8.17 to 7.77 in 2100, and to 7.50 in 2300; aragonite saturation levels (ΩAr) decrease from 4.1 units to 2.2 in 2100 and 1.4 in 2300 under RCP 8.5. These magnitudes and trends of ocean acidification within Hector are largely consistent with the CMIP5 model outputs, although we identify some small biases within Hector's carbonate system. Of the parameters tested, changes in [H+] are most sensitive to parameters that directly affect atmospheric CO2 concentrations – Q10 (terrestrial respiration temperature response) as well as changes in ocean circulation, while changes in ΩAr saturation levels are sensitive to changes in ocean salinity and Q10. We conclude that Hector is a robust tool well suited for rapid ocean acidification projections and sensitivity analyses, and it is capable of emulating both current observations and large-scale climate models under multiple emission pathways.

  20. Can we use high precision metal isotope analysis to improve our understanding of cancer?

    PubMed

    Larner, Fiona

    2016-01-01

    High precision natural isotope analyses are widely used in geosciences to trace elemental transport pathways. The use of this analytical tool is increasing in nutritional and disease-related research. In recent months, a number of groups have shown the potential this technique has in providing new observations for various cancers when applied to trace metal metabolism. The deconvolution of isotopic signatures, however, relies on mathematical models and geochemical data, which are not representative of the system under investigation. In addition to relevant biochemical studies of protein-metal isotopic interactions, technological development both in terms of sample throughput and detection sensitivity of these elements is now needed to translate this novel approach into a mainstream analytical tool. Following this, essential background healthy population studies must be performed, alongside observational, cross-sectional disease-based studies. Only then can the sensitivity and specificity of isotopic analyses be tested alongside currently employed methods, and important questions such as the influence of cancer heterogeneity and disease stage on isotopic signatures be addressed.

  1. Economic evaluation of home-based telebehavioural health care compared to in-person treatment delivery for depression.

    PubMed

    Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N

    2018-02-01

    Introduction Home-based telebehavioural healthcare improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results In the base case analysis the total direct cost of home-based telebehavioural health care was higher than in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. Discussion Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies are able to be successfully deployed on patients' existing technology.

  2. Difficult Decisions Made Easier

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.

  3. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters - the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient - perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns reaching several kilometers above sea level and lasted some hours. Sensitivity analyses and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.
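
    One common way to summarize thousands of such Monte Carlo runs is to rank the inputs by how strongly they correlate with the output of interest. The sketch below uses a toy stand-in response and invented input ranges, not the PUFF model:

```python
# Monte Carlo sensitivity screening: sample uncertain inputs, run the model,
# and rank inputs by rank correlation with the output. 'ash_load' is a toy
# stand-in response, not the PUFF dispersal model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 2_000

inputs = {
    "total_mass":     rng.lognormal(mean=np.log(1e9), sigma=0.5, size=n),  # kg
    "column_height":  rng.uniform(7_000, 13_000, size=n),                  # m
    "diffusion_coef": rng.uniform(100, 10_000, size=n),                    # m2/s
}

def ash_load(m, h, k):
    # toy response: deposit load grows with mass, falls with column height
    # and horizontal diffusion (purely illustrative functional form)
    return m / (h ** 1.5 * np.sqrt(k))

y = ash_load(inputs["total_mass"], inputs["column_height"], inputs["diffusion_coef"])

for name, x in inputs.items():
    rho, _ = spearmanr(x, y)
    print(f"{name:<15} rank correlation with output: {rho:+.2f}")
```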

  4. The role of large-scale eddies in the climate equilibrium. Part 2: Variable static stability

    NASA Technical Reports Server (NTRS)

    Zhou, Shuntai; Stone, Peter H.

    1993-01-01

    Lorenz's two-level model on a sphere is used to investigate how the results of Part 1 are modified when the interaction of the vertical eddy heat flux and static stability is included. In general, the climate state does not depend very much on whether or not this interaction is included, because the poleward eddy heat transport dominates the eddy forcing of mean temperature and wind fields. However, the climatic sensitivity is significantly affected. Compared to two-level model results with fixed static stability, the poleward eddy heat flux is less sensitive to the meridional temperature gradient and the gradient is more sensitive to the forcing. For example, the logarithmic derivative of the eddy flux with respect to the gradient has a slope that is reduced from approximately 15 on a beta-plane with fixed static stability and approximately 6 on a sphere with fixed static stability, to approximately 3 to 4 in the present model. This last result is more in line with analyses from observations. The present model also has a stronger baroclinic adjustment than that in Part 1, more like that in two-level beta-plane models with fixed static stability, that is, the midlatitude isentropic slope is very insensitive to the forcing, the diabatic heating, and the friction, unless the forcing is very weak.

  5. Cost-effectiveness of possible future smoking cessation strategies in Hungary: results from the EQUIPTMOD.

    PubMed

    Németh, Bertalan; Józwiak-Hagymásy, Judit; Kovács, Gábor; Kovács, Attila; Demjén, Tibor; Huber, Manuel B; Cheung, Kei-Long; Coyle, Kathryn; Lester-George, Adam; Pokhrel, Subhash; Vokó, Zoltán

    2018-01-25

    To evaluate potential health and economic returns from implementing smoking cessation interventions in Hungary. The EQUIPTMOD, a Markov-based economic model, was used to assess the cost-effectiveness of three implementation scenarios: (a) introducing a social marketing campaign; (b) doubling the reach of existing group-based behavioural support therapies and proactive telephone support; and (c) a combination of the two scenarios. All three scenarios were compared with current practice. The scenarios were chosen as feasible options for Hungary based on interviews with local stakeholders. Lifetime costs and quality-adjusted life years (QALYs) were calculated from a health-care perspective. The analyses used various return on investment (ROI) estimates, including incremental cost-effectiveness ratios (ICERs), to compare the scenarios. Probabilistic sensitivity analyses assessed the extent to which the estimated mean ICERs were sensitive to the model input values. Introducing a social marketing campaign resulted in 0.3014 additional quitters per 1000 smokers, translating to health-care cost savings of €0.6495 per smoker compared with current practice. When the value of QALY gains was considered, cost savings increased to €14.1598 per smoker. Doubling the reach of existing group-based behavioural support therapies and proactive telephone support resulted in health-care savings of €0.2539 per smoker (€3.9620 with the value of QALY gains) compared with current practice. The respective figures for the combined scenario were €0.8960 and €18.0062. Results were sensitive to model input values. According to the EQUIPTMOD modelling tool, it would be cost-effective for the Hungarian authorities to introduce a social marketing campaign and to double the reach of existing group-based behavioural support therapies and proactive telephone support. Such policies would more than pay for themselves in the long term. © 2018 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
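
    The ICERs referred to here follow the standard definition: the difference in mean lifetime cost between a scenario and current practice divided by the difference in mean QALYs. A minimal, purely illustrative calculation is sketched below; the numbers are hypothetical and are not EQUIPTMOD inputs or outputs.

        def icer(cost_new, cost_old, qaly_new, qaly_old):
            """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        # Hypothetical per-smoker figures: a scenario costing slightly more than
        # current practice but yielding a small QALY gain.
        print(icer(cost_new=120.0, cost_old=100.0, qaly_new=10.02, qaly_old=10.00))
        # -> 1000.0 (currency units per QALY gained); a negative ICER combined with a
        #    QALY gain, as reported for these scenarios, indicates a dominant, cost-saving option.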

  6. Sensitivity of Multiangle, Multispectral Polarimetric Remote Sensing Over Open Oceans to Water-Leaving Radiance: Analyses of RSP Data Acquired During the MILAGRO Campaign

    NASA Technical Reports Server (NTRS)

    Chowdhary, Jacek; Cairns, Brian; Waquet, Fabien; Knobelspiesse, Kirk; Ottaviani, Matteo; Redemann, Jens; Travis, Larry; Mishchenko, Michael

    2012-01-01

    For remote sensing of aerosols over the ocean, there is a contribution from light scattered underwater. The brightness and spectrum of this light depend on the biomass content of the ocean, such that variations in the color of the ocean can be observed even from space. Rayleigh scattering by pure sea water, and Rayleigh-Gans-type scattering by plankton, cause this light to be polarized with a distinctive angular distribution. To study the contribution of this underwater light polarization to multiangle, multispectral observations of polarized reflectance over the ocean, we previously developed a hydrosol model for underwater light-scattering computations that produces realistic variations of the ocean color and of the underwater light polarization signature of pure sea water. In this work we review this hydrosol model, include a correction for the spectrum of the particulate scattering coefficient and backscattering efficiency, and discuss its sensitivity to variations in colored dissolved organic matter (CDOM) and in the scattering function of marine particulates. We then apply this model to measurements of total and polarized reflectance acquired over the open ocean during the MILAGRO field campaign by the airborne Research Scanning Polarimeter (RSP). Analyses show that our hydrosol model faithfully reproduces the water-leaving contributions to RSP reflectance, and that the sensitivity of these contributions to chlorophyll a concentration [Chl] in the ocean varies with the azimuth, height and wavelength of the observations. We also show that the impact of variations in CDOM on the polarized reflectance observed by the RSP at low altitude is comparable to, or much less than, the standard error of this reflectance, whereas the effect on total reflectance can be substantial (up to more than 30%). Finally, we extend our study of polarized reflectance variations with [Chl] and CDOM to include results for simulated spaceborne observations.

  7. Scattering of S waves diffracted at the core-mantle boundary: forward modelling

    NASA Astrophysics Data System (ADS)

    Emery, Valérie; Maupin, Valérie; Nataf, Henri-Claude

    1999-11-01

    The lowermost 200-300 km of the Earth's mantle, known as the D'' layer, is an extremely complex and heterogeneous region where transfer processes between the core and the mantle take place. Diffracted S waves propagate over large distances and are very sensitive to the velocity structure of this region. Strong variations of amplitudes and waveforms are observed on recordings from networks of broad-band seismic stations. We perform forward modelling of diffracted S waves in laterally heterogeneous structures in order to analyse whether or not these observations can be related to lateral inhomogeneities in D''. We combine the diffraction due to the core and the scattering due to small-scale volumetric heterogeneities (10-100 km) by coupling single scattering (Born approximation) with the Langer approximation, which describes Sdiff wave propagation. The influence of the CMB, as well as of possible tunnelling in the core or in D'', on both the direct and the scattered wavefields is fully accounted for. The SH and SV components of the diffracted waves are analysed, as well as their coupling. The modelling is applied to heterogeneous models with different geometries: isolated heterogeneities, vertical cylinders, horizontal inhomogeneities and random media. Amplitudes of scattered waves are weak, and only velocity perturbations of the order of 10 per cent over a volume of 240 x 240 x 300 km3 produce visible effects on seismograms. The two polarizations of Sdiff have different radial sensitivities, the SH component being more sensitive to heterogeneities closer to the CMB. However, we do not observe significant time-shifts between the two components similar to those produced by anisotropy. The long-period Sdiff waves have poor lateral resolution and average the velocity perturbations within their Fresnel zone. Random small-scale heterogeneities with +/- 10 per cent velocity contrast in the layer therefore have little effect on Sdiff, in contrast to their effect on PKIKP.

  8. The cost-effectiveness of dulaglutide versus liraglutide for the treatment of type 2 diabetes mellitus in Spain in patients with BMI ≥30 kg/m2.

    PubMed

    Dilla, Tatiana; Alexiou, Dimitra; Chatzitheofilou, Ismini; Ayyub, Ruba; Lowin, Julia; Norrbacka, Kirsi

    2017-05-01

    Dulaglutide 1.5 mg once weekly is a novel glucagon-like peptide 1 (GLP-1) receptor agonist for the treatment of type 2 diabetes mellitus (T2DM). The objective was to estimate the cost-effectiveness of dulaglutide once weekly vs liraglutide 1.8 mg once daily for the treatment of T2DM in Spain in patients with a BMI ≥30 kg/m2. The IMS CORE Diabetes Model (CDM) was used to estimate costs and outcomes from the perspective of the Spanish National Health System, capturing relevant direct medical costs over a lifetime time horizon. Comparative safety and efficacy data were derived from the direct comparison of dulaglutide 1.5 mg vs liraglutide 1.8 mg in the AWARD-6 trial in patients with a body mass index (BMI) ≥30 kg/m2. All patients were assumed to remain on treatment for 2 years before switching to basal insulin at a daily dose of 40 IU. One-way sensitivity analyses (OWSA) and probabilistic sensitivity analyses (PSA) were conducted to explore the sensitivity of the model to plausible variations in key parameters and to uncertainty in model inputs. Under base-case assumptions, dulaglutide 1.5 mg was less costly and more effective than liraglutide 1.8 mg (total lifetime costs €108,489 vs €109,653; total QALYs 10.281 vs 10.259). OWSA demonstrated that dulaglutide 1.5 mg remained dominant given plausible variations in key input parameters. Results of the PSA were consistent with base-case results. Primary limitations of the analysis are common to other cost-effectiveness analyses of chronic diseases such as T2DM and include the extrapolation of short-term clinical data to the lifetime time horizon and uncertainty around optimum treatment durations. The model found that dulaglutide 1.5 mg was more effective and less costly than liraglutide 1.8 mg for the treatment of T2DM in Spain. Findings were robust to plausible variations in inputs. Based on these results, dulaglutide may result in cost savings to the Spanish National Health System.

  9. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models and obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; the choice among these models was not of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
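
    The propagation scheme described above can be illustrated with a short, generic sketch: draw from an empirical residual distribution for each model in the chain and apply the draws to a nominal prediction. The residual distributions, the multiplicative treatment and the nominal daily energy below are assumptions for illustration, not the study's data or models.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical empirical residuals (percent) for each modelling step,
        # standing in for distributions estimated from measurements.
        residuals = {
            "poa_irradiance":       rng.normal(0.0, 3.0, 2000),
            "effective_irradiance": rng.normal(0.0, 2.0, 2000),
            "cell_temperature":     rng.normal(0.0, 0.5, 2000),
            "dc_power":             rng.normal(0.0, 1.0, 2000),
        }

        n_draws = 10_000
        base_daily_energy = 5.0  # kWh, nominal prediction (illustrative)

        # Propagate: perturb the nominal output by one sampled residual from every step.
        energy = np.full(n_draws, base_daily_energy)
        for res in residuals.values():
            energy *= 1.0 + rng.choice(res, n_draws) / 100.0

        print(f"mean {energy.mean():.3f} kWh, relative std {100 * energy.std() / energy.mean():.2f} %")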

  10. A hybrid-system model of the coagulation cascade: simulation, sensitivity, and validation.

    PubMed

    Makin, Joseph G; Narayanan, Srini

    2013-10-01

    The process of human blood clotting involves a complex interaction of continuous-time/continuous-state processes and discrete-event/discrete-state phenomena, where the former comprise the various chemical rate equations and the latter comprise both threshold-limited behaviors and binary states (presence/absence of a chemical). Whereas previous blood-clotting models used only continuous dynamics and perforce addressed only portions of the coagulation cascade, we capture both continuous and discrete aspects by modeling it as a hybrid dynamical system. The model was implemented as a hybrid Petri net, a graphical modeling language that extends ordinary Petri nets to cover continuous quantities and continuous-time flows. The primary focus is simulation: (1) fidelity to the clinical data in terms of clotting-factor concentrations and elapsed time; (2) reproduction of known clotting pathologies; and (3) fine-grained predictions which may be used to refine clinical understanding of blood clotting. Next we examine sensitivity to rate-constant perturbation. Finally, we propose a method for titrating between reliance on the model and on prior clinical knowledge. For simplicity, we confine these last two analyses to a critical purely-continuous subsystem of the model.
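
    As a purely illustrative companion to the idea of hybrid dynamics, and not the published Petri-net model of the coagulation cascade, the sketch below couples a continuous rate equation with a discrete, threshold-triggered switch, which is the kind of interaction the abstract describes; all names, rates and thresholds are made up.

        # Minimal hybrid-dynamics illustration: species X is produced only while a
        # discrete switch is ON; the switch turns ON once an activator A crosses a threshold.
        dt, t_end = 0.01, 10.0
        k_prod, k_decay, a_rate = 2.0, 0.5, 0.3
        threshold = 1.0

        A, X, switch_on = 0.0, 0.0, False
        for _ in range(int(t_end / dt)):
            A += a_rate * dt                      # continuous accumulation of the activator
            if not switch_on and A >= threshold:  # discrete event: threshold crossing
                switch_on = True
            dX = (k_prod if switch_on else 0.0) - k_decay * X
            X += dX * dt                          # forward-Euler update of the rate equation

        print(f"final A={A:.2f}, X={X:.2f}, switch_on={switch_on}")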

  11. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
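
    To make category (VI) concrete, a common implementation is a two-part model: a logistic regression for whether any cost is incurred and a gamma GLM with log link for the positive costs, multiplied together to give expected cost. The sketch below uses simulated data and assumes a recent statsmodels release; it illustrates the general technique and is not code from the review.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        treat = rng.integers(0, 2, n)
        # Simulated zero-inflated, right-skewed costs: treatment raises both the
        # chance of incurring any cost and the mean cost among users.
        incurred = rng.random(n) < (0.6 + 0.1 * treat)
        cost = np.where(incurred, rng.gamma(2.0, 1000 * (1 + 0.2 * treat)), 0.0)
        df = pd.DataFrame({"treat": treat, "cost": cost, "any_cost": incurred.astype(int)})

        # Part 1: probability of incurring any cost (logistic regression).
        part1 = smf.logit("any_cost ~ treat", data=df).fit(disp=False)
        # Part 2: mean cost among those with positive costs (gamma GLM, log link).
        part2 = smf.glm("cost ~ treat", data=df[df.cost > 0],
                        family=sm.families.Gamma(link=sm.families.links.Log())).fit()

        # Overall expected cost per arm = Pr(cost > 0) * E[cost | cost > 0].
        new = pd.DataFrame({"treat": [0, 1]})
        print(part1.predict(new) * part2.predict(new))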

  12. Gata3 hypermethylation and Foxp3 hypomethylation are associated with sustained protection and bystander effect following epicutaneous immunotherapy in peanut-sensitized mice.

    PubMed

    Mondoulet, Lucie; Dioszeghy, Vincent; Busato, Florence; Plaquet, Camille; Dhelft, Véronique; Bethune, Kevin; Leclere, Laurence; Daviaud, Christian; Ligouis, Mélanie; Sampson, Hugh; Dupont, Christophe; Tost, Jörg

    2018-05-19

    Epicutaneous immunotherapy (EPIT) is a promising method for treating food allergies. In animal models, EPIT induces sustained unresponsiveness and prevents further sensitization, mediated by Tregs. Here, we elucidate the mechanisms underlying the therapeutic effect of EPIT by characterizing the kinetics of DNA methylation changes in sorted cells from spleen and blood and by evaluating its persistence and bystander effect compared with oral immunotherapy (OIT). BALB/c mice orally sensitized to peanut proteins (PPE) were treated by EPIT using a PPE patch or by PPE-OIT. Another set of peanut-sensitized mice treated by EPIT or OIT was sacrificed following a protocol of sensitization to OVA. DNA methylation was analysed during immunotherapy and 8 weeks after the end of treatment in sorted cells from spleen and blood by pyrosequencing. Humoral and cellular responses were measured during and after immunotherapy. Analyses showed a significant hypermethylation of the Gata3 promoter, detectable only in Th2 cells for EPIT from the 4th week, and a significant hypomethylation of the Foxp3 promoter in CD62L+ Tregs, which was sustained only for EPIT. In addition, mice treated with EPIT were protected from subsequent sensitization and maintained the epigenetic signature characteristic of EPIT. Our study demonstrates that EPIT leads to a unique and stable epigenetic signature in specific T-cell compartments, with downregulation of key Th2 regulators and upregulation of Treg transcription factors, likely explaining the sustained protection and the observed bystander effect. This article is protected by copyright. All rights reserved.

  13. Sensitivity Analysis of a Lagrangian Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Rabatel, Matthias; Rampal, Pierre; Bertino, Laurent; Carrassi, Alberto; Jones, Christopher K. R. T.

    2017-04-01

    Large changes in Arctic sea ice have been observed in recent decades in terms of ice thickness, extent and drift. Understanding the mechanisms behind these changes is of paramount importance for enhancing our modelling and forecasting capabilities. For 40 years, models have been developed to describe the non-linear dynamical response of sea ice to a number of external and internal factors. Nevertheless, large deviations between predictions and observations remain. These are related to incorrect descriptions of the sea-ice response and/or to uncertainties about the different sources of information: parameters, initial and boundary conditions and external forcing. Data assimilation (DA) methods are used to combine observations with models, and there is growing interest in DA for sea-ice models and observations. We consider here the state-of-the-art sea-ice model neXtSIM (Rampal et al., 2016), which is based on a time-varying Lagrangian mesh and uses an Elasto-Brittle rheology. Our ultimate goal is to design an appropriate DA scheme for this modelling framework. This contribution reports on the first milestone along this line: a sensitivity analysis to quantify forecast error, guide model development and lay the basis for subsequent Lagrangian DA methods. Specific features of the sea-ice dynamics in relation to the wind are analysed. Virtual buoys are deployed across the Arctic domain, and their trajectories are analysed and compared with observed buoy trajectories. The model response is also compared with that of a model version without internal forcing, to highlight the role of the rheology. Conclusions and perspectives for the general DA implementation are also discussed. Reference: Rampal, P., Bouillon, S., Ólason, E., and Morlighem, M.: neXtSIM: a new Lagrangian sea ice model, The Cryosphere, 10(3), 1055-1073, 2016.

  14. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    NASA Astrophysics Data System (ADS)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
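
    The performance evaluation step described above is typically summarized with a handful of standard dispersion-model statistics such as fractional bias (FB), normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). The sketch below computes these for hypothetical daily averages; this metric set is a common choice in the field, not necessarily the exact set used in the study.

        import numpy as np

        def evaluate(obs, pred):
            """Common dispersion-model evaluation metrics."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
            nmse = ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())
            fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
            return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

        # Hypothetical daily-average NOx concentrations (ppb) at a near-road monitor.
        obs  = [24.0, 31.5, 18.2, 40.1, 27.3]
        pred = [20.5, 35.0, 15.9, 33.8, 30.1]
        print(evaluate(obs, pred))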

  15. Sleep duration and sleep quality are associated differently with alterations of glucose homeostasis.

    PubMed

    Byberg, S; Hansen, A-L S; Christensen, D L; Vistisen, D; Aadahl, M; Linneberg, A; Witte, D R

    2012-09-01

    Studies suggest that inadequate sleep duration and poor sleep quality increase the risk of impaired glucose regulation and diabetes. However, associations with specific markers of glucose homeostasis are less well explained. The objective of this study was to explore possible associations of sleep duration and sleep quality with markers of glucose homeostasis and glucose tolerance status in a healthy population-based study sample. The study comprised 771 participants from the Danish population-based cross-sectional 'Health2008' study. Sleep duration and sleep quality were measured by self-report. Markers of glucose homeostasis were derived from a 3-point oral glucose tolerance test and included fasting plasma glucose, 2-h plasma glucose, HbA1c, two measures of insulin sensitivity (the insulin sensitivity index(0,120) and homeostasis model assessment of insulin sensitivity), the homeostasis model assessment of β-cell function and glucose tolerance status. Associations of sleep duration and sleep quality with markers of glucose homeostasis and tolerance were analysed by multiple linear and logistic regression. A 1-h increment in sleep duration was associated with a 0.3 mmol/mol (0.3%) decrement in HbA1c and a 25% reduction in the risk of having impaired glucose regulation. Further, a 1-point increment in sleep quality was associated with a 2% increase in both the insulin sensitivity index(0,120) and homeostasis model assessment of insulin sensitivity, as well as a 1% decrease in homeostasis model assessment of β-cell function. In the present study, shorter sleep duration was mainly associated with later alterations in glucose homeostasis, whereas poorer sleep quality was mainly associated with earlier alterations in glucose homeostasis. Thus, adopting healthy sleep habits may benefit glucose metabolism in healthy populations. © 2012 The Authors. Diabetic Medicine © 2012 Diabetes UK.

  16. Seismic waveform sensitivity to global boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2012-09-01

    We investigate the implications of lateral variations in the topography of global seismic discontinuities, in the framework of high-resolution forward modelling and seismic imaging. We run 3-D wave-propagation simulations accurate at periods of 10 s and longer, with Earth models including core-mantle boundary topography anomalies of ~1000 km spatial wavelength and up to 10 km height. We obtain very different waveform signatures for PcP (reflected) and Pdiff (diffracted) phases, supporting the theoretical expectation that the latter are sensitive primarily to large-scale structure, whereas the former only to small scale, where large and small are relative to the frequency. PcP at 10 s appears well suited to mapping such a small-scale perturbation, whereas Pdiff at the same frequency carries faint signatures that do not allow any tomographic reconstruction; only at higher frequencies does the signature become stronger. We present a new algorithm to compute sensitivity kernels relating seismic traveltimes (measured by cross-correlation of observed and theoretical seismograms) to the topography of seismic discontinuities at any depth in the Earth using full 3-D wave propagation. Calculation of accurate finite-frequency sensitivity kernels is notoriously expensive, but we reduce computational costs drastically by limiting ourselves to spherically symmetric reference models and exploiting the axial symmetry of the resulting propagating wavefield, which collapses to a 2-D numerical domain. We compute and analyse a suite of kernels for upper and lower mantle discontinuities that can be used for finite-frequency waveform inversion. The PcP and Pdiff sensitivity footprints are in good agreement with the results obtained by cross-correlating perturbed and unperturbed seismograms, validating our approach against full 3-D modelling to invert for such structures.
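
    The traveltime measurement mentioned here, cross-correlating an observed (or perturbed) seismogram against a synthetic from the reference model, can be sketched in a few lines. The example below uses a Ricker wavelet as a stand-in pulse at a 10 s period and a known 1.2 s delay; it illustrates only the measurement, not the kernel computation.

        import numpy as np

        dt = 0.1                                # sampling interval, s
        t = np.arange(0, 60, dt)

        def ricker(t, t0, f=0.1):
            """Ricker wavelet, used here as a stand-in for a PcP/Pdiff pulse."""
            a = (np.pi * f * (t - t0)) ** 2
            return (1 - 2 * a) * np.exp(-a)

        reference = ricker(t, t0=30.0)          # synthetic for the unperturbed model
        perturbed = ricker(t, t0=31.2)          # "observed" trace, delayed by 1.2 s

        xc = np.correlate(perturbed, reference, mode="full")
        lag = (np.argmax(xc) - (len(t) - 1)) * dt
        print(f"cross-correlation traveltime shift: {lag:.2f} s")   # ~ +1.2 s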

  17. Following butter flavour deterioration with an acoustic wave sensor.

    PubMed

    Gaspar, Cláudia R B S; Gomes, M Teresa S R

    2012-09-15

    Off-flavours develop naturally in butter, and the process is accelerated by heat. An acoustic wave sensor was used to detect the aroma compounds evolved from heated butter, and the results showed marked changes that coincided with odour changes detected by sensory analysis. The flavour compounds were also analysed by GC/MS for identification. The response of the sensor was fully characterized in terms of its sensitivity to each of the identified compounds, and the sensitivities of the SPME/sensor system were compared with those of the SPME/GC/MS system. The sensor analytical system was found to be more sensitive to methylketones than to fatty acids. The SPME/GC/MS system also showed the highest sensitivity to 2-heptanone, followed by 2-nonanone, with third place occupied by undecanone and butanoic acid, to which the sensor showed moderate sensitivity. 2-Heptanone was found to be an appropriate model compound for following odour changes up to 500 h, and the lower sensitivity of the sensor to butanoic acid proved to be a positive characteristic, as saturation was prevented and other, more subtle changes in the flavour could be perceived. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Carbon nuclear magnetic resonance spectroscopic fingerprinting of commercial gasoline: pattern-recognition analyses for screening quality control purposes.

    PubMed

    Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de

    2010-06-30

    In this work, the combination of carbon nuclear magnetic resonance (13C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, selected by Hierarchical Cluster Analysis (HCA) from samples collected over several months at retail gas stations, into previously defined quality classes. With the optimized 13C NMR-SIMCA algorithm, sensitivity values of 99.0% were obtained in the training set, with leave-one-out cross-validation, and of 92.0% in the external prediction set. Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices. Copyright 2010 Elsevier B.V. All rights reserved.
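
    In outline, SIMCA builds a separate principal-component model for each quality class and accepts a new sample into any class whose residual limit it satisfies. The sketch below is a deliberately simplified version (PCA plus a Q-residual cut-off only, on simulated fingerprints), assuming scikit-learn is available; it is not the optimized algorithm or the gasoline data from the study.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(7)

        # Simulated "fingerprints": 40 integrated 13C NMR intensities per sample,
        # two classes with slightly different mean profiles.
        def simulate(mean_shift, n=30):
            X = rng.normal(0, 1, (n, 40))
            X[:, :5] += mean_shift
            return X

        X_conform, X_nonconform = simulate(0.0), simulate(1.5)

        def fit_class_model(X, n_components=3):
            pca = PCA(n_components=n_components).fit(X)
            resid = X - pca.inverse_transform(pca.transform(X))
            q = (resid ** 2).sum(axis=1)
            return pca, np.quantile(q, 0.95)     # crude 95% residual threshold

        models = {"conform": fit_class_model(X_conform),
                  "non-conform": fit_class_model(X_nonconform)}

        def simca_assign(x):
            """Accept the sample into every class whose residual limit it satisfies."""
            hits = []
            for label, (pca, q_crit) in models.items():
                resid = x - pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
                if (resid ** 2).sum() <= q_crit:
                    hits.append(label)
            return hits or ["unassigned"]

        print(simca_assign(X_conform[0]), simca_assign(X_nonconform[0]))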

  19. Application of simple negative feedback model for avalanche photodetectors investigation

    NASA Astrophysics Data System (ADS)

    Kushpil, V. V.

    2009-10-01

    A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as a new type of device operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it a useful tool for analysing them, and it allows a better understanding of APD operating conditions. Simulations and analyses of experimental data for different types of APDs are presented.
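
    Miller's empirical formula for the avalanche multiplication factor, which the record takes as its starting point, is usually written as

        M(V) = \frac{1}{\,1 - (V / V_B)^{n}\,},

    where V is the voltage across the junction, V_B the breakdown voltage and n an empirical exponent depending on the material and device structure. One plausible way to close a simple negative-feedback loop of the kind described, offered here only as a sketch and not necessarily the paper's formulation, is to let the junction voltage be reduced by the drop across an internal feedback element, V = V_bias - M I_0 R_f, and to solve the two relations self-consistently for M.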

  20. Cost-effectiveness analysis of thiazolidinediones in uncontrolled type 2 diabetic patients receiving sulfonylureas and metformin in Thailand.

    PubMed

    Chirakup, Suphachai; Chaiyakunapruk, Nathorn; Chaikledkeaw, Usa; Pongcharoensuk, Petcharat; Ongphiphadhanakul, Boonsong; Roze, Stephane; Valentine, William J; Palmer, Andrew J

    2008-03-01

    The national essential drug committee in Thailand suggested that only one of the thiazolidinediones be included in the hospital formulary, but little was known about their cost-effectiveness. This study aims to determine the incremental cost-effectiveness ratio of pioglitazone 45 mg compared with rosiglitazone 8 mg in uncontrolled type 2 diabetic patients receiving sulfonylureas and metformin in Thailand. A Markov diabetes model (Center for Outcome Research model) was used. Baseline characteristics of patients were based on the Thai diabetes registry project. Costs of diabetes were calculated mainly from Buddhachinaraj hospital. The nonspecific mortality rate and transition probabilities of death from renal replacement therapy were obtained from Thai sources. Clinical effectiveness of thiazolidinediones was retrieved from a meta-analysis. All analyses took the perspective of government hospital policymakers. Both costs and outcomes were discounted at a rate of 3%. Base-case results were expressed as incremental cost per quality-adjusted life year (QALY) gained, and a series of sensitivity analyses was performed. In the base-case analysis, the pioglitazone group had better clinical outcomes and higher lifetime costs. The incremental cost per QALY gained was 186,246 baht (US$ 5389). The acceptability curves showed that the probability of pioglitazone being cost-effective was 29% at a willingness to pay of one times the Thai gross domestic product (GDP) per capita. The final outcomes were most sensitive to the effect of pioglitazone on HbA1c reduction. Our findings show that, in type 2 diabetic patients whose blood glucose cannot be controlled with the combination of a sulfonylurea and metformin, the use of pioglitazone 45 mg fell on average within the cost-effective range recommended by the World Health Organization (one to three times GDP per capita), compared with rosiglitazone 8 mg. Nevertheless, based on sensitivity analysis, its probability of being cost-effective was quite low. Hospital policymakers may consider these findings as part of the information for the decision-making process.
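
    The acceptability curve cited above is built from the probabilistic sensitivity analysis: at each willingness-to-pay value, it is the fraction of PSA draws for which the incremental net monetary benefit is positive. A minimal sketch follows; the simulated incremental costs and QALYs are purely illustrative and are not the study's PSA output.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 2000

        # Hypothetical PSA draws of incremental cost (baht) and incremental QALYs.
        d_cost = rng.normal(40000, 15000, n)
        d_qaly = rng.normal(0.2, 0.15, n)

        for wtp in [100_000, 150_000, 200_000, 300_000]:       # willingness to pay per QALY
            inmb = wtp * d_qaly - d_cost                       # incremental net monetary benefit
            print(f"WTP {wtp:>7,d} baht/QALY: P(cost-effective) = {np.mean(inmb > 0):.2f}")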
