Sample records for estimation sensitivity analysis

  1. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and
    Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  2. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity Analysis of the Reduced Order Coagulation... sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. We... Environment. Comput. Sci. Eng. 2007, 9, 90–95. 69. Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates

  3. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
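
    For orientation, the generalized sensitivity functions invoked above are usually defined (following Thomaseth and Cobelli; the notation below is ours and is not quoted from the record) as

      gs_k(t_l) = \sum_{i=1}^{l} \left[ F(\theta_0)^{-1} \, \sigma^{-2}(t_i)\, \nabla_\theta y(t_i,\theta_0) \right]_k \left[ \nabla_\theta y(t_i,\theta_0) \right]_k ,
      \qquad
      F(\theta_0) = \sum_{i=1}^{N} \sigma^{-2}(t_i)\, \nabla_\theta y(t_i,\theta_0)\, \nabla_\theta y(t_i,\theta_0)^{\mathsf T} .

    Each component rises from 0 at the first measurement to 1 at the last; flat stretches mark measurement times that carry essentially no information about the corresponding parameter, which is how statements such as "insulin measurements after 62 min possess no information about the parameter estimates" are read.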

  4. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  5. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity to internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and the production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
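
    As a rough illustration of the kind of one-at-a-time price sensitivity sweep this record describes, the sketch below perturbs each cost and production parameter by 10% and reports an elasticity-like measure. The cost model and baseline numbers are placeholders, not the IPEG formulation or the report's data.

      # One-at-a-time sensitivity of a wafering add-on price estimate (illustrative only).
      def addon_price(equipment, labor, materials, utilities, slicing_rate, yield_):
          # Placeholder cost model: annualized costs spread over good wafers produced.
          wafers_per_year = slicing_rate * 8000.0 * yield_     # hypothetical throughput model
          annual_cost = 0.3 * equipment + labor + materials + utilities
          return annual_cost / wafers_per_year

      baseline = dict(equipment=250_000, labor=60_000, materials=40_000,
                      utilities=15_000, slicing_rate=25.0, yield_=0.90)
      base_price = addon_price(**baseline)

      for name in baseline:
          perturbed = dict(baseline)
          perturbed[name] *= 1.10                              # +10% perturbation
          rel_change = (addon_price(**perturbed) - base_price) / base_price
          # Elasticity-like measure: % change in price per % change in the parameter.
          print(f"{name:13s} sensitivity ≈ {rel_change / 0.10:+.2f}")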

  6. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  7. Predictive Uncertainty And Parameter Sensitivity Of A Sediment-Flux Model: Nitrogen Flux and Sediment Oxygen Demand

    EPA Science Inventory

    Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...

  8. Bayesian Estimation of the True Prevalence and of the Diagnostic Test Sensitivity and Specificity of Enteropathogenic Yersinia in Finnish Pig Serum Samples.

    PubMed

    Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H

    2015-01-01

    Bayesian analysis was used to estimate the pig-level and herd-level true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA, which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimated sensitivity and specificity of the ELISA were 79.5% and 96.9%.

  9. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
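
    A schematic of the two-step idea, not the authors' implementation: screen with an inexpensive bound built from the variance of the quantity of interest and a Fisher-information estimate, then spend coupled finite-difference simulations only on the parameters that survive. The toy simulator, bound form and threshold below are assumptions made for illustration.

      import numpy as np

      def simulate(theta, seed, n_rep=200):
          # Stand-in for a stochastic reaction-network simulator: samples of a scalar
          # quantity of interest (QoI) at parameter vector theta.
          rng = np.random.default_rng(seed)
          mean = theta[0] * np.exp(-2.0 * theta[1]) + 0.2 * theta[2]
          return rng.normal(mean, 0.1, size=n_rep)

      theta0 = np.array([1.0, 0.5, 0.3, 2.0, 0.01])

      # Step 1: cheap screening. The paper's bound combines the QoI variance with a
      # Fisher Information Matrix estimate; the FIM diagonal here is an assumed input.
      fim_diag = np.array([1.0, 0.8, 0.3, 1e-3, 1e-4])
      var_qoi = simulate(theta0, seed=0).var()
      bound = np.sqrt(var_qoi * fim_diag)                  # illustrative bound form
      keep = np.where(bound > 0.05 * bound.max())[0]       # screen out insensitive params

      # Step 2: common-random-number (coupled) finite differences, survivors only.
      h = 1e-3
      for k in keep:
          theta_p = theta0.copy()
          theta_p[k] += h
          d = simulate(theta_p, seed=1) - simulate(theta0, seed=1)   # same seed -> low variance
          print(f"parameter {k}: dQoI/dtheta ≈ {d.mean() / h:+.3f}")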

  10. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  11. Estimating Sobol Sensitivity Indices Using Correlations

    EPA Science Inventory

    Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
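
    A compact sketch of the standard Monte Carlo estimator for first-order Sobol indices on a toy model; the record above concerns a correlation-based variant, which is not reproduced here.

      import numpy as np

      def model(x):
          # Toy model: the first input dominates, the third is inert.
          return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.0 * x[:, 2]

      rng = np.random.default_rng(1)
      n, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, size=(n, d))
      B = rng.uniform(-np.pi, np.pi, size=(n, d))
      fA, fB = model(A), model(B)
      var_y = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                              # swap column i with the second sample
          S_i = np.mean(fB * (model(ABi) - fA)) / var_y    # Saltelli-style first-order estimator
          print(f"S_{i + 1} ≈ {S_i:.3f}")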

  12. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.

  13. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.

  14. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  15. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume, includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  16. Neurobehavioral deficits, diseases, and associated costs of exposure to endocrine-disrupting chemicals in the European Union.

    PubMed

    Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo

    2015-04-01

    Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Setting, Patients and Participants, and Intervention: Cost estimation was carried out from a societal perspective, ie, including direct costs (eg, treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3,290 (sensitivity analysis, 3,290 to 8,080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59,300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of >€150 billion costs/year. These results emphasize the advantages of controlling EDC exposure.

  17. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  18. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
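
    The forward problem the macros wrap is the one-dimensional convection-dispersion equation; for the equilibrium case with a constant-concentration inlet it has the classic Ogata-Banks closed form. A small Python version is sketched below in place of VBA, with arbitrary parameter values; CXTFIT itself offers several boundary-condition and nonequilibrium variants not shown here.

      import numpy as np
      from scipy.special import erfc

      def cde_ogata_banks(x, t, v, D, c0=1.0):
          # Resident concentration for the 1-D equilibrium CDE with constant inlet c0.
          # x: distance, t: time, v: pore-water velocity, D: dispersion coefficient.
          a = (x - v * t) / (2.0 * np.sqrt(D * t))
          b = (x + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

      # Illustrative breakthrough curve at x = 30 cm (v in cm/h, D in cm^2/h are placeholders).
      t = np.linspace(0.1, 48.0, 200)
      c = cde_ogata_banks(x=30.0, t=t, v=2.0, D=1.5)
      print(np.round(c[::40], 4))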

  20. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  1. Convergence Estimates for Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal

    1997-01-01

    A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems both multidisciplinary and single discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system, and thus it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with the measure for the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstration of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.

  2. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models (Open Access)

    DTIC Science & Technology

    2015-03-16

    shaded region around each total sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity... Performance We conducted a global sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the... Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95. 69. Sobol, I. Global sensitivity indices for nonlinear

  3. Data challenges in estimating the capacity value of solar photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years’ historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  4. Data Challenges in Estimating the Capacity Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  5. Data challenges in estimating the capacity value of solar photovoltaics

    DOE PAGES

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    2017-04-30

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years’ historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  6. A novel hypothesis on the sensitivity of the fecal occult blood test: Results of a joint analysis of 3 randomized controlled trials.

    PubMed

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F

    2009-06-01

    Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.

  7. Diagnostic performance of contrast-enhanced spectral mammography: Systematic review and meta-analysis.

    PubMed

    Tagliafico, Alberto Stefano; Bignotti, Bianca; Rossi, Federica; Signori, Alessio; Sormani, Maria Pia; Valdora, Francesca; Calabrese, Massimo; Houssami, Nehmat

    2016-08-01

    To estimate sensitivity and specificity of CESM for breast cancer diagnosis. Systematic review and meta-analysis of the accuracy of CESM in finding breast cancer in highly selected women. We estimated summary receiver operating characteristic curves, sensitivity and specificity according to quality criteria with QUADAS-2. Six hundred four studies were retrieved; 8 of these, reporting on 920 patients with 994 lesions, were eligible for inclusion. Estimated sensitivity from all studies was: 0.98 (95% CI: 0.96-1.00). Specificity was estimated from six studies reporting raw data: 0.58 (95% CI: 0.38-0.77). The majority of studies were scored as at high risk of bias due to the very selected populations. CESM has a high sensitivity but very low specificity. The source studies were based on highly selected case series and prone to selection bias. High-quality studies are required to assess the accuracy of CESM in unselected cases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    PubMed

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of diabetes type II (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each one of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% in women; 90.0% in men); blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex were detected in the averages of all components of MetS in young people without alterations. Young Mexicans are highly prone to acquire MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
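
    The five screening statistics named in this abstract are all simple functions of a 2x2 confusion matrix; the sketch below restates the definitions (the counts are invented for illustration, not the study's data).

      def screening_stats(tp, fp, fn, tn):
          # Standard definitions of the five statistics used in the abstract.
          return {
              "sensitivity": tp / (tp + fn),               # true-positive rate
              "specificity": tn / (tn + fp),               # true-negative rate
              "ppv":         tp / (tp + fp),               # positive predictive value (precision)
              "npv":         tn / (tn + fn),               # negative predictive value
              "accuracy":    (tp + tn) / (tp + fp + fn + tn),
          }

      # Hypothetical counts for one MetS component used as a stand-alone predictor.
      print(screening_stats(tp=780, fp=1900, fn=30, tn=3353))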

  9. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter, we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
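
    A stripped-down numerical sketch of the reweighting idea: draw one generic injection set, record which injections are found, and reuse that set for any population model via importance weights. The detection model, distributions and volume below are invented for illustration; real pipelines use recovered injections and an SNR or false-alarm-rate threshold.

      import numpy as np

      rng = np.random.default_rng(3)

      # Generic injections over one source parameter (say, mass), uniform on [5, 80].
      n_inj = 200_000
      m = rng.uniform(5.0, 80.0, n_inj)
      p_inj = np.full(n_inj, 1.0 / 75.0)                 # injection density

      # Toy detection model: heavier systems are more often "found" (purely illustrative).
      found = rng.uniform(size=n_inj) < np.clip(m / 100.0, 0.0, 1.0)

      V_total = 50.0   # volume (arbitrary units) in which injections were placed (assumed)

      def sensitive_volume(pop_density):
          # Self-normalised importance-sampling estimate of <V> for a new population model.
          w = pop_density(m) / p_inj
          return V_total * np.sum(w * found) / np.sum(w)

      uniform_pop = lambda x: np.full_like(x, 1.0 / 75.0)
      norm = (5.0 ** -1.3 - 80.0 ** -1.3) / 1.3          # normalises m**-2.3 on [5, 80]
      powerlaw_pop = lambda x: x ** -2.3 / norm
      print(sensitive_volume(uniform_pop), sensitive_volume(powerlaw_pop))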

  10. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

    The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify key influential parameters on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  11. Using archived ITS data for sensitivity analysis in the estimation of mobile source emissions

    DOT National Transportation Integrated Search

    2000-12-01

    The study described in this paper demonstrates the use of archived ITS data from San Antonio's TransGuide traffic management center (TMC) for sensitivity analyses in the estimation of on-road mobile source emissions. Because of the stark comparison b...

  12. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  13. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  14. BAYESIAN ANALYSIS TO EVALUATE TESTS FOR THE DETECTION OF MYCOBACTERIUM BOVIS INFECTION IN FREE-RANGING WILD BISON (BISON BISON ATHABASCAE) IN THE ABSENCE OF A GOLD STANDARD.

    PubMed

    Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig

    2015-07-01

    We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.

  15. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
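
    A minimal classic-FAST sketch for first-order indices, showing the search-curve sampling and Fourier decomposition described above; the frequency set and toy model are ad hoc choices, not taken from the paper, and none of the random-sampling variants it analyses are shown.

      import numpy as np

      def fast_first_order(model, omegas, n=2049, M=4):
          # Classic FAST: sample along the search curve and read partial variances
          # off the Fourier coefficients at each parameter's frequency and harmonics.
          s = np.pi * (2.0 * np.arange(1, n + 1) - n - 1) / n           # s in (-pi, pi)
          x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi      # inputs in [0, 1]
          y = model(x)
          A = lambda k: np.mean(y * np.cos(k * s))
          B = lambda k: np.mean(y * np.sin(k * s))
          total_var = np.var(y)
          S = []
          for w in omegas:
              Vi = 2.0 * sum(A(p * w) ** 2 + B(p * w) ** 2 for p in range(1, M + 1))
              S.append(Vi / total_var)
          return np.array(S)

      # Toy model with a dominant first input; frequencies chosen so the first four
      # harmonics do not overlap (an ad hoc set, small enough for n = 2049 samples).
      toy = lambda x: np.sin(2.0 * np.pi * x[:, 0]) + 0.3 * x[:, 1] + 0.05 * x[:, 2]
      print(fast_first_order(toy, omegas=np.array([11, 35, 67])))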

  16. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  17. Regionalising MUSLE factors for application to a data-scarce catchment

    NASA Astrophysics Data System (ADS)

    Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani

    2018-04-01

    The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to: (1) regionalise the MUSLE erodibility factors; and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated model outputs to be most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha⁻¹ yr⁻¹ were within the range of estimates by previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation. This is part of the ongoing development of a model which can estimate soil loss and sediment delivery at broad spatial and temporal scales.
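
    For context, the soil-loss term in MUSLE is usually written in the Williams form below; this is quoted from the general literature rather than from the abstract, and the symbols follow the conventional definitions.

      \text{sed} = 11.8\,\bigl(Q_{\text{surf}}\; q_{\text{peak}}\bigr)^{0.56}\; K\; LS\; C\; P

    Here sed is the event sediment yield (t), Q_surf the storm runoff volume (m³), q_peak the peak runoff rate (m³ s⁻¹), and K, LS, C and P the soil-erodibility, slope length-steepness, cover and support-practice factors; the regionalisation described above supplies these erodibility-related factors from GIS coverages.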

  18. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the "improved Sobol's global variance decomposition method". The analysis showed that parameters related to road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.

  19. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.

  20. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.

  1. A cost analysis of implementing a behavioral weight loss intervention in community mental health settings: Results from the ACHIEVE trial.

    PubMed

    Janssen, Ellen M; Jerome, Gerald J; Dalcin, Arlene T; Gennusa, Joseph V; Goldsholl, Stacy; Frick, Kevin D; Wang, Nae-Yuh; Appel, Lawrence J; Daumit, Gail L

    2017-06-01

    In the ACHIEVE randomized controlled trial, an 18-month behavioral intervention accomplished weight loss in persons with serious mental illness who attended community psychiatric rehabilitation programs. This analysis estimates costs for delivering the intervention during the study. It also estimates expected costs to implement the intervention more widely in a range of community mental health programs. Using empirical data, costs were calculated from the perspective of a community psychiatric rehabilitation program delivering the intervention. Personnel and travel costs were calculated using time sheet data. Rent and supply costs were calculated using rent per square foot and intervention records. A univariate sensitivity analysis and an expert-informed sensitivity analysis were conducted. With 144 participants receiving the intervention and a mean weight loss of 3.4 kg, costs of $95 per participant per month and $501 per kilogram lost in the trial were calculated. In univariate sensitivity analysis, costs ranged from $402 to $725 per kilogram lost. Through expert-informed sensitivity analysis, it was estimated that rehabilitation programs could implement the intervention for $68 to $85 per client per month. Costs of implementing the ACHIEVE intervention were in the range of other intensive behavioral weight loss interventions. Wider implementation of efficacious lifestyle interventions in community mental health settings will require adequate funding mechanisms. © 2017 The Obesity Society.

  2. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210

  3. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
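
    The following sketch, based on the classical instantaneous point-source solution of the one-dimensional advection-dispersion equation, shows how sensitivities to velocity and dispersion at a fixed observation point can be approximated by central finite differences; the numerical values are illustrative and are not taken from the paper.

```python
# Finite-difference sensitivity sketch for the 1-D advection-dispersion solution.
import numpy as np

def conc(x, t, v, D, mass=1.0):
    """Instantaneous point-source solution of the 1-D advection-dispersion equation."""
    return mass / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

def sensitivity(x, t, v, D, which, rel=1e-4):
    """Central finite-difference sensitivity dC/dp for p in {'v', 'D'}."""
    p = {"v": v, "D": D}
    h = rel * p[which]
    p_hi, p_lo = dict(p), dict(p)
    p_hi[which] += h
    p_lo[which] -= h
    return (conc(x, t, **p_hi) - conc(x, t, **p_lo)) / (2.0 * h)

x_obs = 10.0                       # observation point [m] (illustrative)
v, D = 0.5, 0.1                    # velocity [m/d], dispersion [m^2/d] (illustrative)

for t in np.linspace(5.0, 40.0, 8):
    dcdv = sensitivity(x_obs, t, v, D, "v")
    dcdD = sensitivity(x_obs, t, v, D, "D")
    print(f"t={t:5.1f} d  dC/dv={dcdv:10.3e}  dC/dD={dcdD:10.3e}")
```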

  4. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  5. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  6. Perturbation analysis for patch occupancy dynamics

    USGS Publications Warehouse

    Martin, Julien; Nichols, James D.; McIntyre, Carol L.; Ferraz, Goncalo; Hines, James E.

    2009-01-01

    Perturbation analysis is a powerful tool to study population and community dynamics. This article describes expressions for sensitivity metrics reflecting changes in equilibrium occupancy resulting from small changes in the vital rates of patch occupancy dynamics (i.e., probabilities of local patch colonization and extinction). We illustrate our approach with a case study of occupancy dynamics of Golden Eagle (Aquila chrysaetos) nesting territories. Examination of the hypothesis of system equilibrium suggests that the system satisfies equilibrium conditions. Estimates of vital rates obtained using patch occupancy models are used to estimate equilibrium patch occupancy of eagles. We then compute estimates of sensitivity metrics and discuss their implications for eagle population ecology and management. Finally, we discuss the intuition underlying our sensitivity metrics and then provide examples of ecological questions that can be addressed using perturbation analyses. For instance, the sensitivity metrics lead to predictions about the relative importance of local colonization and local extinction probabilities in influencing equilibrium occupancy for rare and common species.
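
    A minimal sketch of the type of sensitivity metric discussed above, using the standard equilibrium occupancy psi* = gamma / (gamma + eps) for a patch-occupancy Markov model with local colonization probability gamma and local extinction probability eps; the numerical values are hypothetical, not the Golden Eagle estimates.

```python
# Equilibrium occupancy and its sensitivities to the vital rates (toy values).
gamma, eps = 0.10, 0.05          # colonization and extinction probabilities (hypothetical)

psi_eq = gamma / (gamma + eps)                  # equilibrium occupancy
dpsi_dgamma = eps / (gamma + eps) ** 2          # sensitivity to colonization
dpsi_deps = -gamma / (gamma + eps) ** 2         # sensitivity to extinction

print(f"equilibrium occupancy: {psi_eq:.3f}")
print(f"d(psi*)/d(gamma) = {dpsi_dgamma:.3f}")
print(f"d(psi*)/d(eps)   = {dpsi_deps:.3f}")
```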

  7. Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method

    NASA Astrophysics Data System (ADS)

    Ciszkiewicz, A.; Knapczyk, J.

    2015-08-01

    A position and displacement analysis of a spherical model of the human knee joint using the vector method is presented. Sensitivity analysis and parameter estimation were performed using an evolutionary algorithm. Computer simulations for the mechanism with the estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning displacement and load analysis in the knee joint.

  8. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
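
    A toy sketch of the Monte Carlo idea: uncertainty in a fault slip rate is propagated through a placeholder hazard function, and the spread of the PGA estimate is summarized by its standard deviation and coefficient of variation. The hazard function and all numbers are illustrative only.

```python
# Monte Carlo propagation of slip-rate uncertainty through a placeholder hazard model.
import numpy as np

rng = np.random.default_rng(1)

def pga_estimate(slip_rate_mm_yr):
    """Placeholder hazard model: PGA (g) increasing with slip rate (illustrative)."""
    return 0.4 + 0.05 * np.log1p(slip_rate_mm_yr)

slip_rate = rng.normal(loc=4.0, scale=1.0, size=10_000)   # mm/yr, hypothetical distribution
slip_rate = np.clip(slip_rate, 0.1, None)

pga = pga_estimate(slip_rate)
mean, std = pga.mean(), pga.std(ddof=1)
cov = std / mean                                           # coefficient of variation
print(f"PGA = {mean:.3f} g +/- {std:.3f} g  (COV = {100 * cov:.1f}%)")
```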

  9. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
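
    A hedged illustration of one way to build a structured ACM estimate for an equispaced linear array: the sample matrix is pushed toward the Toeplitz form of the exact ACM by averaging its elements along each diagonal. This is a generic sketch and not necessarily the exact spatial averaging used in the paper.

```python
# Diagonal (Toeplitz) averaging of a noisy sample array correlation matrix.
import numpy as np

def structured_acm(R):
    """Average elements of a Hermitian sample ACM along each diagonal."""
    n = R.shape[0]
    S = np.zeros_like(R)
    for d in range(n):
        avg = np.mean(np.diagonal(R, offset=d))
        idx = np.arange(n - d)
        S[idx, idx + d] = avg
        S[idx + d, idx] = np.conj(avg)
    return S

rng = np.random.default_rng(0)
n_elem, n_snap = 6, 50
snapshots = (rng.standard_normal((n_elem, n_snap)) +
             1j * rng.standard_normal((n_elem, n_snap)))
R_sample = snapshots @ snapshots.conj().T / n_snap   # noisy sample ACM
R_struct = structured_acm(R_sample)                  # structured (Toeplitz) estimate
print(np.round(R_struct, 2))
```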

  10. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  11. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.

  12. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
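
    A minimal sketch of the adjoint idea on a deliberately tiny problem (an explicit-Euler discretization of dx/dt = -k x with a least-squares misfit): one backward sweep yields the gradient of the cost with respect to the parameter, at a cost that does not grow with the number of parameters. This is not the genome-scale or ErbB-model code from the paper.

```python
# Discrete adjoint gradient for a one-parameter ODE fit, checked against finite differences.
import numpy as np

h, N = 0.05, 200
k_true, k_guess, x0 = 1.3, 1.0, 2.0

def forward(k):
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        x[n + 1] = (1.0 - h * k) * x[n]          # explicit Euler step for dx/dt = -k*x
    return x

data = forward(k_true)                            # synthetic "measurements"

def cost_and_gradient(k):
    x = forward(k)
    r = x - data
    J = np.sum(r[1:] ** 2)                        # least-squares misfit
    lam = np.zeros(N + 1)                         # adjoint variables lam[n] = dJ/dx[n]
    lam[N] = 2.0 * r[N]
    for n in range(N - 1, 0, -1):                 # single backward sweep
        lam[n] = 2.0 * r[n] + (1.0 - h * k) * lam[n + 1]
    dJdk = np.sum(lam[1:] * (-h) * x[:-1])        # sum over steps of lam_{n+1} * d(step)/dk
    return J, dJdk

J, g = cost_and_gradient(k_guess)
eps = 1e-6                                        # finite-difference check
g_fd = (cost_and_gradient(k_guess + eps)[0] - cost_and_gradient(k_guess - eps)[0]) / (2 * eps)
print(f"J = {J:.4f},  adjoint dJ/dk = {g:.6f},  finite-difference dJ/dk = {g_fd:.6f}")
```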

  13. Sensitivity of combustion and ignition characteristics of the solid-fuel charge of the microelectromechanical system of a microthruster to macrokinetic and design parameters

    NASA Astrophysics Data System (ADS)

    Futko, S. I.; Ermolaeva, E. M.; Dobrego, K. V.; Bondarenko, V. P.; Dolgii, L. N.

    2012-07-01

    We have developed a sensitivity analysis permitting effective estimation of the change in the impulse responses of a microthrusters and in the ignition characteristics of the solid-fuel charge caused by the variation of the basic macrokinetic parameters of the mixed fuel and the design parameters of the microthruster's combustion chamber. On the basis of the proposed sensitivity analysis, we have estimated the spread of both the propulsive force and impulse and the induction period and self-ignition temperature depending on the macrokinetic parameters of combustion (pre-exponential factor, activation energy, density, and heat content) of the solid-fuel charge of the microthruster. The obtained results can be used for rapid and effective estimation of the spread of goal functions to provide stable physicochemical characteristics and impulse responses of solid-fuel mixtures in making and using microthrusters.

  14. Mechanical energy estimation during walking: validity and sensitivity in typical gait and in children with cerebral palsy.

    PubMed

    Van de Walle, P; Hallemans, A; Schwartz, M; Truijen, S; Gosselink, R; Desloovere, K

    2012-02-01

    Gait efficiency in children with cerebral palsy is usually quantified by metabolic energy expenditure. Mechanical energy estimations, however, can be a valuable supplement as they can be assessed during gait analysis and plotted over the gait cycle, thus revealing information on timing and sources of increases in energy expenditure. Unfortunately, little information on validity and sensitivity exists. Three mechanical estimation approaches: (1) centre of mass (CoM) approach, (2) sum of segmental energies (SSE) approach and (3) integrated joint power approach, were validated against oxygen consumption and each other. Sensitivity was assessed in typical gait and in children with diplegia. CoM approach underestimated total energy expenditure and showed poor sensitivity. SSE approach overestimated energy expenditure and showed acceptable sensitivity. Validity and sensitivity were best in the integrated joint power approach. This method is therefore preferred for mechanical energy estimation in children with diplegia. However, mechanical energy should supplement, not replace metabolic energy, as total energy expended is not captured in any mechanical approach. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    ERIC Educational Resources Information Center

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  16. Urinary neutrophil gelatinase-associated lipocalin for diagnosis and estimating activity in lupus nephritis: a meta-analysis.

    PubMed

    Fang, Y G; Chen, N N; Cheng, Y B; Sun, S J; Li, H X; Sun, F; Xiang, Y

    2015-12-01

    Urinary neutrophil gelatinase-associated lipocalin (uNGAL) is relatively specific in lupus nephritis (LN) patients. However, its diagnostic value has not been evaluated. The aim of this review was to determine the value of uNGAL for diagnosis and estimating activity in LN. A comprehensive search was performed on PubMed, EMBASE, Web of Knowledge, Cochrane electronic databases through December 2014. Meta-analysis of sensitivity and specificity was performed with a random-effects model. Additionally, summary receiver operating characteristic (SROC) curves and area under the curve (AUC) values were calculated. Fourteen studies were selected for this review. With respect to diagnosing LN, the pooled sensitivity and specificity were 73.6% (95% confidence interval (CI), 61.9-83.3) and 78.1% (95% CI, 69.0-85.6), respectively. The SROC-AUC value was 0.8632. Regarding estimating LN activity, the pooled sensitivity and specificity were 66.2% (95% CI, 60.4-71.7) and 62.1% (95% CI, 57.9-66.3), respectively. The SROC-AUC value was 0.7583. In predicting renal flares, the pooled sensitivity and specificity were 77.5% (95% CI, 68.1-85.1) and 65.3% (95% CI, 60.0-70.3), respectively. The SROC-AUC value was 0.7756. In conclusion, this meta-analysis indicates that uNGAL has relatively fair sensitivity and specificity in diagnosing LN, estimating LN activity and predicting renal flares, suggesting that uNGAL is a potential biomarker in diagnosing LN and monitoring LN activity. © The Author(s) 2015.
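
    A sketch of the kind of pooling behind such estimates: study sensitivities are combined on the logit scale with a DerSimonian-Laird random-effects model. The study counts below are made up for illustration, and the bivariate/SROC machinery of the actual meta-analysis is not reproduced here.

```python
# DerSimonian-Laird random-effects pooling of study sensitivities on the logit scale.
import numpy as np

tp = np.array([30, 45, 22, 60, 18])   # true positives per study (hypothetical)
fn = np.array([10, 12, 11, 15, 9])    # false negatives per study (hypothetical)

y = np.log(tp / fn)                   # logit(sensitivity) = log(TP/FN)
v = 1.0 / tp + 1.0 / fn               # approximate within-study variance of the logit

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)          # between-study variance estimate

w_star = 1.0 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

expit = lambda z: 1.0 / (1.0 + np.exp(-z))
lo, hi = y_pooled - 1.96 * se, y_pooled + 1.96 * se
print(f"pooled sensitivity = {expit(y_pooled):.3f} (95% CI {expit(lo):.3f}-{expit(hi):.3f})")
```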

  17. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  18. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  19. The propagation of wind errors through ocean wave hindcasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holthuijsen, L.H.; Booij, N.; Bertotti, L.

    1996-08-01

    To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts when only one wind analysis is available.

  20. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  1. On Sensitivity Analysis within the 4DVAR Framework

    DTIC Science & Technology

    2014-02-01

    sitivity’’ (AS) approach, Lee et al. (2001) estimated the sensitivity of the Indonesian Throughflow to remote wind forcing, Losch and Heimbach ( 2007 ...of massive paral- lelization. The ensemble sensitivity (ES) analysis (e.g., Ancell and Hakim 2007 ; Torn and Hakim 2008) follows the basic principle of...variational assimila- tion techniques (e.g., Cao et al. 2007 ; Liu et al. 2008; Yaremchuk et al. 2009; Clayton et al. 2013). In particular, Yaremchuk

  2. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.

  4. Analysis of the moments of the sensitivity function for resistivity over a homogeneous half-space: Rules of thumb for pseudoposition, offline sensitivity and resolution

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-08-01

    It is instructive to consider the sensitivity function for a homogeneous half space for resistivity since it has a simple mathematical formula and it does not require a priori knowledge of the resistivity of the ground. Past analyses of this function have allowed visualization of the regions that contribute most to apparent resistivity measurements with given array configurations. The horizontally integrated form of this equation gives the sensitivity function for an infinitesimally thick horizontal slab with a small resistivity contrast and analysis of this function has admitted estimates of the depth of investigation for a given electrode array. Recently, it has been shown that the average of the vertical coordinate over this function yields a simple formula that can be used to estimate the depth of investigation. The sensitivity function for a vertical inline slab has also been previously calculated. In this contribution, I show that the sensitivity function for a homogeneous half-space can also be integrated so as to give sensitivity functions to semi-infinite vertical slabs that are perpendicular to the array axis. These horizontal sensitivity functions can, in turn, be integrated over the spatial coordinates to give the mean horizontal positions of the sensitivity functions. The mean horizontal positions give estimates for the centres of the regions that affect apparent resistivity measurements for arbitrary array configuration and can be used as horizontal positions when plotting pseudosections even for non-collinear arrays. The mean of the horizontal coordinate that is perpendicular to a collinear array also gives a simple formula for estimating the distance over which offline resistivity anomalies will have a significant effect. The root mean square (rms) widths of the sensitivity functions are also calculated in each of the coordinate directions as an estimate of the inverse of the resolution of a given array. For depth and in the direction perpendicular to the array, the rms thickness is shown to be very similar to the mean distance. For the direction parallel to the array, the rms thickness is shown to be proportional to the array length and similar to the array length divided by 2 for many arrays. I expect that these formulas will provide useful rules of thumb for estimating the centres and extents of regions influencing apparent resistivity measurements for survey planning and for education.

  5. Bayesian estimation of the sensitivity and specificity of individual fecal culture and Paralisa to detect Mycobacterium avium subspecies paratuberculosis infection in young farmed deer.

    PubMed

    Stringer, Lesley A; Jones, Geoff; Jewell, Chris P; Noble, Alasdair D; Heuer, Cord; Wilson, Peter R; Johnson, Wesley O

    2013-11-01

    A Bayesian latent class model was used to estimate the sensitivity and specificity of an immunoglobulin G1 serum enzyme-linked immunosorbent assay (Paralisa) and individual fecal culture to detect young deer infected with Mycobacterium avium subsp. paratuberculosis. Paired fecal and serum samples were collected, between July 2009 and April 2010, from 20 individual yearling (12-24-month-old) deer in each of 20 South Island and 18 North Island herds in New Zealand and subjected to culture and Paralisa, respectively. Two fecal samples and 16 serum samples from 356 North Island deer, and 55 fecal and 37 serum samples from 401 South Island deer, were positive. The estimate of individual fecal culture sensitivity was 77% (95% credible interval [CI] = 61-92%) with specificity of 99% (95% CI = 98-99.7%). The Paralisa sensitivity estimate was 19% (95% CI = 10-30%), with specificity of 94% (95% CI = 93-96%). All estimates were robust to variation of priors and assumptions tested in a sensitivity analysis. The data informs the use of the tests in determining infection status at the individual and herd level.

  6. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known, and more challenges can be expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to source-receiver offset, vertical interval velocity error and time-picking error. The results show that the methodology works perfectly for noise-free synthetic data with a short spread length. However, the method is extremely sensitive to time-picking errors caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group reflection events and build a suitable geological model for anisotropy parameter inversion.

  7. Efficiency in the Community College Sector: Stochastic Frontier Analysis

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Belfield, Clive

    2017-01-01

    This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…

  8. Normalized sensitivities and parameter identifiability of in situ diffusion experiments on Callovo Oxfordian clay at Bure site

    NASA Astrophysics Data System (ADS)

    Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.

    Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at the Bure underground research laboratory in Meuse/Haute Marne (France) to characterize diffusion and retention of radionuclides in Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments with parameters and geometric characteristics similar to those of the real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful for identifying which parameters can be estimated with less uncertainty and for finding the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of clay and EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for filter, gap and EdZ. Estimates of the effective diffusion coefficient and the porosity of clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and porosities of clay and EdZ is only possible when the standard deviation of random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. Normalized sensitivities as well as the identifiability analysis of synthetic experiments provide additional insight into inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of real DIR in situ diffusion experiments.
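
    A generic sketch of the normalized dimensionless sensitivity s = (p/C)·dC/dp, computed by central finite differences for a placeholder dilution model of the test-interval concentration; the model, units and parameter values are illustrative and are not the actual DIR/INVERSE-CORE 2D setup. In this toy model De and porosity enter only as a product, so their sensitivities coincide, mirroring the strong correlation noted above.

```python
# Normalized dimensionless sensitivities by central finite differences (toy model).
import numpy as np

def tracer_conc(t, De, phi, c0=1.0, V=1.0e-3):
    """Placeholder dilution model for the test-interval concentration (units simplified)."""
    return c0 * np.exp(-De * phi * t / V)

def normalized_sensitivity(t, params, which, rel=1e-4):
    p = dict(params)
    h = rel * p[which]
    p_hi = dict(p); p_hi[which] += h
    p_lo = dict(p); p_lo[which] -= h
    dCdp = (tracer_conc(t, **p_hi) - tracer_conc(t, **p_lo)) / (2.0 * h)
    return p[which] / tracer_conc(t, **p) * dCdp        # s = (p / C) * dC/dp

params = {"De": 2.0e-11 * 3.15e7, "phi": 0.16}          # De in m^2/yr (from 2e-11 m^2/s), porosity
for t in [0.1, 0.5, 1.0, 2.0]:                          # years
    sD = normalized_sensitivity(t, params, "De")
    sP = normalized_sensitivity(t, params, "phi")
    print(f"t={t:4.1f} yr  s_De={sD:7.3f}  s_phi={sP:7.3f}")
```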

  9. Diagnostic performance of matrix-assisted laser desorption ionisation time-of-flight mass spectrometry in blood bacterial infections: a systematic review and meta-analysis.

    PubMed

    Scott, Jamie S; Sterling, Sarah A; To, Harrison; Seals, Samantha R; Jones, Alan E

    2016-07-01

    Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) has shown promise in decreasing time to identification of causative organisms compared to traditional methods; however, the utility of MALDI-TOF MS in a heterogeneous clinical setting is uncertain. The aim was to perform a systematic review of the operational performance of the Bruker MALDI-TOF MS system and to evaluate published cut-off values against traditional blood cultures. A comprehensive literature search was performed. Studies were included if they performed direct MALDI-TOF MS analysis of blood culture specimens in human patients with suspected bacterial infections using the Bruker Biotyper software. Sensitivities and specificities of the combined studies were estimated using a hierarchical random effects linear model (REML) incorporating cut-off scores of ≥1.7 and ≥2.0. Fifty publications were identified, with 11 studies included after final review. The estimated sensitivity utilising a cut-off of ≥2.0 from the combined studies was 74.6% (95% CI = 67.9-89.3%), with an estimated specificity of 88.0% (95% CI = 74.8-94.7%). When assessing a cut-off of ≥1.7, the combined sensitivity increased to 92.8% (95% CI = 87.4-96.0%), but the estimated specificity decreased to 81.2% (95% CI = 61.9-96.6%). In this analysis, MALDI-TOF MS showed acceptable sensitivity and specificity in bacterial speciation with the current recommended cut-off point compared to blood cultures; however, lowering the cut-off point from ≥2.0 to ≥1.7 would increase the sensitivity of the test without significant detrimental effect on the specificity, which could improve clinician confidence in their results.

  10. Basal measures of insulin sensitivity and insulin secretion and simplified glucose tolerance tests in dogs.

    PubMed

    Verkest, K R; Fleeman, L M; Rand, J S; Morton, J M

    2010-10-01

    There is need for simple, inexpensive measures of glucose tolerance, insulin sensitivity, and insulin secretion in dogs. The aim of this study was to estimate the closeness of correlation between fasting and dynamic measures of insulin sensitivity and insulin secretion, the precision of fasting measures, and the agreement between results of standard and simplified glucose tolerance tests in dogs. A retrospective descriptive study using 6 naturally occurring obese and 6 lean dogs was conducted. Data from frequently sampled intravenous glucose tolerance tests (FSIGTTs) in 6 obese and 6 lean client-owned dogs were used to calculate HOMA, QUICKI, fasting glucose and insulin concentrations. Fasting measures of insulin sensitivity and secretion were compared with MINMOD analysis of FSIGTTs using Pearson correlation coefficients, and they were evaluated for precision by the discriminant ratio. Simplified sampling protocols were compared with standard FSIGTTs using Lin's concordance correlation coefficients, limits of agreement, and Pearson correlation coefficients. All fasting measures except fasting plasma glucose concentration were moderately correlated with MINMOD-estimated insulin sensitivity (|r| = 0.62-0.80; P < 0.03), and those that combined fasting insulin and glucose were moderately closely correlated with MINMOD-estimated insulin secretion (r = 0.60-0.79; P < 0.04). HOMA calculated using the nonlinear formulae had the closest estimated correlation (r = 0.77 and 0.74) and the best discrimination for insulin sensitivity and insulin secretion (discriminant ratio 4.4 and 3.4, respectively). Simplified sampling protocols with half as many samples collected over 3 h had close agreement with the full sampling protocol. Fasting measures and simplified intravenous glucose tolerance tests reflect insulin sensitivity and insulin secretion derived from frequently sampled glucose tolerance tests with MINMOD analysis in dogs. Copyright 2010 Elsevier Inc. All rights reserved.
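
    For reference, a sketch of the widely used fasting surrogate indices mentioned above; note that the study used nonlinear HOMA formulae, whereas the familiar linear approximations shown here are for illustration only (glucose in mmol/L for HOMA and mg/dL for QUICKI, insulin in microU/mL).

```python
# Linear-approximation HOMA-IR and QUICKI from fasting glucose and insulin.
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR = (fasting glucose [mmol/L] * fasting insulin [microU/mL]) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI = 1 / (log10(insulin [microU/mL]) + log10(glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

glucose_mmol = 5.2                     # fasting glucose, mmol/L (hypothetical)
insulin = 12.0                         # fasting insulin, microU/mL (hypothetical)
glucose_mgdl = glucose_mmol * 18.0     # unit conversion mmol/L -> mg/dL

print(f"HOMA-IR = {homa_ir(glucose_mmol, insulin):.2f}")
print(f"QUICKI  = {quicki(glucose_mgdl, insulin):.3f}")
```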

  11. A preliminary cost-effectiveness analysis of hepatitis E vaccination among pregnant women in epidemic regions.

    PubMed

    Zhao, Yueyuan; Zhang, Xuefeng; Zhu, Fengcai; Jin, Hui; Wang, Bei

    2016-08-02

    Objective To estimate the cost-effectiveness of hepatitis E vaccination among pregnant women in epidemic regions. Methods A decision tree model was constructed to evaluate the cost-effectiveness of 3 hepatitis E virus vaccination strategies from a societal perspective. The model parameters were estimated on the basis of published studies and experts' experience. Sensitivity analysis was used to evaluate the uncertainties of the model. Results Vaccination was more economically effective on the basis of the incremental cost-effectiveness ratio (ICER < 3 times China's per capita gross domestic product per quality-adjusted life year); moreover, screening and vaccination had higher QALYs and lower costs compared with universal vaccination. No parameters significantly impacted the ICER in one-way sensitivity analysis, and probabilistic sensitivity analysis also showed screening and vaccination to be the dominant strategy. Conclusion Screening and vaccination is the most economical strategy for pregnant women in epidemic regions; however, further studies are necessary to confirm the efficacy and safety of the hepatitis E vaccines.

  12. B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area

    NASA Astrophysics Data System (ADS)

    Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian

    2017-07-01

    We examine the contributions of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity. The uncertainty and coefficient of variation of the slip rate and b-value in the Lembang and Cimandiri Fault areas have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate and b-value, with uncertainties of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with a COV of 27% and 0.39 ± 0.05 g with a COV of 13% for Sukabumi and Bandung, respectively.

  13. Eigenvalue Contribution Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  14. Sensitivity and specificity of the Streptococcus pneumoniae urinary antigen test for unconcentrated urine from adult patients with pneumonia: a meta-analysis.

    PubMed

    Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi

    2013-11-01

    Studies on the sensitivity and specificity of the Binax Now Streptococcus pneumoniae urinary antigen test (index test) show considerable variance in results. Studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test using unconcentrated urine to identify S. pneumoniae infection in adults with pneumonia were included. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities. One was the specificity evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'). The other was the specificity evaluated based on both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'), using a fixed model for meta-analysis. We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98) without heterogeneity or publication bias. We attempted to conduct a meta-analysis with the 10 studies involving 1916 patients to estimate specificity (unknown and other), but it remained unclear due to moderate heterogeneity and possible publication bias. In our meta-analysis, sensitivity of the index test was moderate and specificity (other) was high; however, the specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  15. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    NASA Astrophysics Data System (ADS)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
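
    A sketch of common model-evaluation statistics (fractional bias, normalized mean square error, factor-of-two fraction) of the kind used to compare predicted and observed near-road concentrations; these are standard air-quality evaluation metrics, and the paper's exact metric set may differ.

```python
# Standard air-quality model evaluation metrics applied to hypothetical daily values.
import numpy as np

def fractional_bias(obs, mod):
    return (np.mean(obs) - np.mean(mod)) / (0.5 * (np.mean(obs) + np.mean(mod)))

def nmse(obs, mod):
    return np.mean((obs - mod) ** 2) / (np.mean(obs) * np.mean(mod))

def fac2(obs, mod):
    ratio = mod / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = np.array([35.0, 42.0, 28.0, 55.0, 31.0])   # observed daily NOx, ppb (hypothetical)
mod = np.array([30.0, 50.0, 25.0, 40.0, 36.0])   # modeled daily NOx, ppb (hypothetical)

print(f"FB   = {fractional_bias(obs, mod):.3f}")
print(f"NMSE = {nmse(obs, mod):.3f}")
print(f"FAC2 = {fac2(obs, mod):.2f}")
```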

  16. Sensitivity Analysis of Down Woody Material Data Processing Routines

    Treesearch

    Christopher W. Woodall; Duncan C. Lutes

    2005-01-01

    Weight per unit area (load) estimates of Down Woody Material (DWM) are the most common requests by users of the USDA Forest Service's Forest Inventory and Analysis (FIA) program's DWM inventory. Estimating DWM loads requires the uniform compilation of DWM transect data for the entire United States. DWM weights may vary by species, level of decay, woody...

  17. A Sensitivity Analysis of a Map of Habitat Quality for the California Spotted Owl (Strix occidentalis occidentalis) in southern California

    Treesearch

    Ellen M. Hines; Janet Franklin

    1997-01-01

    Using a Geographic Information System (GIS), a sensitivity analysis was performed on estimated mapping errors in vegetation type, forest canopy cover percentage, and tree crown size to determine the possible effects error in these data might have on delineating suitable habitat for the California Spotted Owl (Strix occidentalis occidentalis) in...

  18. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.

  19. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  20. Inverse estimation of critical factors for controlling over-prediction of summertime tropospheric O3 over East Asia based on the combination of DDM sensitivity analysis and modeled Green's function method

    NASA Astrophysics Data System (ADS)

    Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.

    2012-12-01

    Air quality studies based on chemical transport models have provided many important results that advance our knowledge of air pollution phenomena; however, discrepancies between modeling results and observations remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several possible reasons can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of anthropogenic NOx / VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reaction constants by combining DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear, least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the rate constant of the HNO3-forming reaction (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, with the inversion indicating a +29.0 % increment for this reaction. This estimate agrees well with the corresponding values in CB4 and CB5, and also with the SAPRC07 estimate. For the NO2 photolysis rate, a 49.4 % reduction was indicated. This result suggests that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.
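
    As a rough illustration of the Green's-function step described above, the sketch below assembles a response matrix from DDM-like sensitivity coefficients and solves a linear least-squares problem for scaling adjustments to uncertain reaction constants. All numbers, dimensions, and reaction labels are hypothetical and are not the coefficients or results of the study.

```python
import numpy as np

# Hypothetical response matrix G: d(O3 at receptor i) / d(fractional change in reaction j),
# as would come from DDM sensitivity runs (units: ppb per unit fractional change).
G = np.array([
    [-4.2, 1.8],
    [-3.6, 2.1],
    [-5.0, 1.5],
    [-4.5, 2.4],
])

# Mismatch between observed and modeled ozone at each receptor (obs - model, ppb).
d = np.array([-6.0, -5.1, -6.8, -5.9])

# Least-squares estimate of the fractional adjustments to the reaction constants.
adjustments, *_ = np.linalg.lstsq(G, d, rcond=None)
for name, a in zip(["reaction constant 1", "reaction constant 2"], adjustments):
    print(f"{name}: {a:+.1%} change suggested")
```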

  1. To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data.

    PubMed

    Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter

    2017-01-01

    Longitudinal data is almost always burdened with missing data. However, in educational and psychological research, there is a large discrepancy between methodological suggestions and research practice. The former suggests applying sensitivity analysis in order to assess the robustness of the results in terms of varying assumptions regarding the mechanism generating the missing data. However, in research practice, participants with missing data are usually discarded by relying on listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version, in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines in terms of reporting the results from sensitivity analysis are synthesised and applied to the results from the tutorial example.

  2. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters for the technical and commercial readiness phases of the FSA project are projected. The process add-on price is analyzed using the base case data for cost parameters such as equipment, space, direct labor, materials and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness and cell efficiency are also discussed.
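
    A minimal sketch of this kind of one-at-a-time price sensitivity calculation follows. The toy price formula, cost figures, capital recovery factor, and production assumptions are entirely hypothetical; they are not the web growth process model or its data.

```python
# One-at-a-time sensitivity sketch for a hypothetical sheet add-on price model.

def addon_price(equipment, labor, materials, utilities, growth_rate, run_fraction):
    """Toy add-on price ($/m^2): annualized costs divided by annual sheet area."""
    annual_cost = equipment * 0.2 + labor + materials + utilities  # $/yr (0.2 = assumed capital recovery factor)
    annual_area = growth_rate * run_fraction * 8000.0              # m^2/yr (assumes 8000 operating h/yr)
    return annual_cost / annual_area

base = dict(equipment=250_000, labor=60_000, materials=40_000,
            utilities=15_000, growth_rate=0.4, run_fraction=0.9)
p0 = addon_price(**base)

for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})            # +10% perturbation of one input
    dp = addon_price(**perturbed) - p0
    print(f"{name:12s}: {100 * dp / p0:+6.2f}% change in add-on price")
```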

  3. New infrastructure for studies of transmutation and fast systems concepts

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-09-01

    In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  4. A low power ADS for transmutation studies in fast systems

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-12-01

    In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  5. Reconciling uncertain costs and benefits in bayes nets for invasive species management

    USGS Publications Warehouse

    Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.

    2010-01-01

    Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.

  6. Observability Analysis of a MEMS INS/GPS Integration System with Gyroscope G-Sensitivity Errors

    PubMed Central

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-01-01

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer from significant g-sensitivity errors in high-dynamic applications. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions for compensating these errors exist. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. Considering two forms of the g-sensitivity coefficient matrix, we add its elements as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A global observable condition of the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, could be made observable by maneuvering based on the conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved. PMID:25171122

  7. Observability analysis of a MEMS INS/GPS integration system with gyroscope G-sensitivity errors.

    PubMed

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-08-28

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer from significant g-sensitivity errors in high-dynamic applications. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions for compensating these errors exist. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. Considering two forms of the g-sensitivity coefficient matrix, we add its elements as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A global observable condition of the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, could be made observable by maneuvering based on the conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved.

  8. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, with the simplifications remaining consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic in the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  9. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this kind of uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. We propose a novel approach in this paper for investigating the plausibility of each missing data mechanism assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to apply a plausibility evaluation system to each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
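
    A very simplified sketch of the comparison step described above follows: for each candidate value of a sensitivity parameter, data are simulated under the corresponding MNAR mechanism and compared with the observed data through average nearest-neighbour distances. The data-generating model, selection mechanism, and parameter grid are invented for illustration and do not reproduce the paper's estimator or its asymptotic theory.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def nn_distance(observed, simulated):
    """Mean nearest-neighbour distance from each observed point to the simulated set."""
    tree = cKDTree(simulated.reshape(-1, 1))
    d, _ = tree.query(observed.reshape(-1, 1), k=1)
    return d.mean()

# "Observed" incomplete data: larger values are more likely to be missing (MNAR).
full = rng.normal(0.0, 1.0, 500)
observed = full[rng.random(500) > 1 / (1 + np.exp(-(full - 0.5)))]  # values that were recorded

# Compare candidate sensitivity-parameter values (strength of the MNAR selection).
for delta in [0.0, 0.5, 1.0, 1.5]:
    sims = rng.normal(0.0, 1.0, 500)
    kept = sims[rng.random(500) > 1 / (1 + np.exp(-delta * (sims - 0.5)))]
    print(f"delta={delta:3.1f}  mean NN distance: {nn_distance(observed, kept):.4f}")
```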

  10. Analysis of Sensitivity Experiments - An Expanded Primer

    DTIC Science & Technology

    2017-03-08

    diehard practitioners. The difficulty associated with mastering statistical inference presents a true dilemma. Statistics is an extremely applied...lost, perhaps forever. In other words, when on this safari, you need a guide. This report is designed to be a guide, of sorts. It focuses on analytical...estimated accurately if our analysis is to have real meaning. For this reason, the sensitivity test procedure is designed to concentrate measurements

  11. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
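
    To make the common random number idea concrete, the sketch below runs Gillespie simulations of a simple birth-death process (constant production rate k, first-order degradation) with the same random seed for the nominal and perturbed production rate, and forms a finite-difference estimate of the sensitivity of the mean copy number. The reaction network and rates are invented; this is not the CRP coupling or the example systems from the paper.

```python
import numpy as np

def gillespie_final_count(k, gamma, x0, t_end, rng):
    """Gillespie SSA for production (rate k) and degradation (rate gamma * x)."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k, gamma * x
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() * a0 < a1 else -1

def mean_final(k, seed, n=2000, gamma=0.1, x0=0, t_end=20.0):
    rng = np.random.default_rng(seed)  # same seed -> common random numbers across parameter values
    return np.mean([gillespie_final_count(k, gamma, x0, t_end, rng) for _ in range(n)])

k, dk, seed = 1.0, 0.05, 12345
sens_crn = (mean_final(k + dk, seed) - mean_final(k, seed)) / dk       # CRN finite difference
sens_ind = (mean_final(k + dk, seed) - mean_final(k, seed + 1)) / dk   # independent random streams
exact = (1.0 / 0.1) * (1 - np.exp(-0.1 * 20.0))                        # analytical d<X(T)>/dk for this model
print(f"CRN: {sens_crn:.2f}   independent: {sens_ind:.2f}   exact: {exact:.2f}")
```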

  12. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
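
    The following is a minimal sketch of the PRCC calculation described above: inputs and output are rank-transformed, the linear effect of the other inputs is regressed out of both the input of interest and the output, and the residuals are correlated. The test model and input distributions are made up; this is not the IMM simulation data.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with y (illustrative)."""
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    yr = rankdata(y)
    out = []
    for j in range(Xr.shape[1]):
        # Regress out the (rank-transformed) other inputs from input j and from the output.
        others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))                                   # three uncertain inputs
y = 5 * X[:, 0] ** 2 + np.exp(X[:, 1]) + 0.1 * rng.normal(size=500)    # nonlinear model; X[:, 2] is inert
print(np.round(prcc(X, y), 3))                                         # third coefficient should be near zero
```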

  13. A sensitive continuum analysis method for gamma ray spectra

    NASA Technical Reports Server (NTRS)

    Thakur, Alakh N.; Arnold, James R.

    1993-01-01

    In this work we examine ways to improve the sensitivity of the analysis procedure for gamma ray spectra with respect to small differences in the continuum (Compton) spectra. The method developed is applied to analyze gamma ray spectra obtained from planetary mapping by the Mars Observer spacecraft launched in September 1992. Calculated Mars simulation spectra and actual thick target bombardment spectra have been taken as test cases. The principle of the method rests on the extraction of continuum information from Fourier transforms of the spectra. We study how a better estimate of the spectrum from larger regions of the Mars surface will improve the analysis for smaller regions with poorer statistics. Estimation of signal within the continuum is done in the frequency domain which enables efficient and sensitive discrimination of subtle differences between two spectra. The process is compared to other methods for the extraction of information from the continuum. Finally we explore briefly the possible uses of this technique in other applications of continuum spectra.

  14. Damage classification and estimation in experimental structures using time series analysis and pattern recognition

    NASA Astrophysics Data System (ADS)

    de Lautour, Oliver R.; Omenzetter, Piotr

    2010-07-01

    Developed for studying long sequences of regularly sampled data, time series analysis methods are being increasingly investigated for use in Structural Health Monitoring (SHM). In this research, Autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures, a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in the undamaged state and a limited number of damaged states. The coefficients of the AR models were treated as damage-sensitive features and used as input to an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, performing well with a small number of damage-sensitive features and a limited number of sensors.
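
    As a sketch of the feature-extraction step described above, the code below fits an autoregressive model of fixed order to an acceleration time history by ordinary least squares and returns the AR coefficients that would be fed to a classifier. The signals are synthetic sine-plus-noise records, not the bookshelf or ASCE benchmark data, and the ANN stage is omitted.

```python
import numpy as np

def ar_coefficients(x, order):
    """Least-squares fit of an AR(order) model; returns the coefficient vector."""
    x = np.asarray(x, float)
    # Column for lag k+1 is x shifted by k+1 samples relative to the target vector.
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
undamaged = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.normal(size=t.size)  # "healthy" response at 3.0 Hz
damaged = np.sin(2 * np.pi * 2.6 * t) + 0.1 * rng.normal(size=t.size)    # stiffness loss shifts the frequency

print("AR(4) features, undamaged:", np.round(ar_coefficients(undamaged, 4), 3))
print("AR(4) features, damaged:  ", np.round(ar_coefficients(damaged, 4), 3))
```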

  15. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    DOE PAGES

    Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...

    2017-11-20

    The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
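
    A compact sketch of the space-filling sampling step mentioned above is given below, using SciPy's quasi-Monte Carlo Latin hypercube generator to draw parameter sets inside illustrative bounds. The parameter names and ranges are placeholders rather than the 46 VIC parameters, and the emulation and variance-based analysis stages are not shown.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical hydrologic-model parameters and plausible ranges (placeholders).
names = ["infiltration_shape", "max_soil_moisture_mm", "baseflow_fraction"]
lower = [0.001, 50.0, 0.05]
upper = [0.9, 500.0, 0.9]

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit_sample = sampler.random(n=8)                  # 8 space-filling points in the unit cube
params = qmc.scale(unit_sample, lower, upper)      # rescale to the parameter bounds

for row in params:
    print({n: round(v, 3) for n, v in zip(names, row)})
```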

  16. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra

    The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.

  17. Estimating costs in the economic evaluation of medical technologies.

    PubMed

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.

  18. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.

  19. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    NASA Astrophysics Data System (ADS)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

    We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important for the success or failure of leukemia remission under treatment. For the most significant parameters, which affect the evolution of CML during Imatinib treatment, we estimate realistic values using experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.

  20. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    The process-based model BIOME-BGC was run for a sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of a dry tropical forest in India. The sensitivity test reveals that forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). These parameters therefore need greater precision and attention during estimation and observation in field studies.

  1. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analysis process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.

  2. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  3. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
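
    To illustrate the second step described above, the sketch below computes the AIC for least-squares fits with different numbers of re-estimated parameters, using the Gaussian-likelihood form AIC = n ln(RSS/n) + 2k. The candidate models and data are synthetic stand-ins, not the oilseed rape FSPM or its calibration data.

```python
import numpy as np

def aic_least_squares(rss, n_obs, n_params):
    """AIC for a Gaussian least-squares fit: n * ln(RSS / n) + 2k."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 60)
y = 1.0 + 2.5 * x + 0.8 * x ** 2 + 0.1 * rng.normal(size=x.size)  # synthetic "observations"

# Candidate models: polynomials with increasing numbers of free parameters.
for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"{degree + 1} parameters: AIC = {aic_least_squares(rss, x.size, degree + 1):7.2f}")
```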

  4. Working covariance model selection for generalized estimating equations.

    PubMed

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM

    NASA Astrophysics Data System (ADS)

    Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz

    2018-03-01

    The article presents cost of equity estimation of capital groups from the fuel and energy sector, listed at the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of such cost, depending on the financing structure of the entity. The objective of the article formulated in this manner has determined its structure. It focuses on presentation of substantive analyses related to the core of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, estimation of cost was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, sensitivity analysis of such cost was performed depending on the structure of financing the company's operation.
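
    For reference, the CAPM relationship used in such analyses is simply r_e = r_f + beta (E[r_m] - r_f). The sketch below implements it with placeholder figures; these are not the values for Tauron or PGE.

```python
def capm_cost_of_equity(risk_free, beta, market_return):
    """CAPM: cost of equity = risk-free rate + beta * market risk premium."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical inputs (decimal fractions per year).
risk_free, market_return = 0.035, 0.085
for beta in (0.8, 1.0, 1.2):  # simple sensitivity to the equity beta
    r_e = capm_cost_of_equity(risk_free, beta, market_return)
    print(f"beta={beta:.1f}: cost of equity = {r_e:.2%}")
```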

  6. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  7. Clinical evaluation and validation of laboratory methods for the diagnosis of Bordetella pertussis infection: Culture, polymerase chain reaction (PCR) and anti-pertussis toxin IgG serology (IgG-PT).

    PubMed

    Lee, Adria D; Cassiday, Pamela K; Pawloski, Lucia C; Tatti, Kathleen M; Martin, Monte D; Briere, Elizabeth C; Tondella, M Lucia; Martin, Stacey W

    2018-01-01

    The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2-4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods: pertussis culture as the "gold standard," composite reference standard analysis (CRS), and latent class analysis (LCA). Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis.
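
    As a reminder of the culture-as-gold-standard calculation referred to above, the sketch below derives sensitivity and specificity from a 2x2 table of index-test results against a reference standard. The counts are invented and do not correspond to the study's PCR or serology data.

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table against a reference standard."""
    sensitivity = tp / (tp + fn)   # true positives / all reference-positive
    specificity = tn / (tn + fp)   # true negatives / all reference-negative
    return sensitivity, specificity

# Hypothetical counts: index test result (+/-) cross-tabulated with the reference standard (+/-).
tp, fp, fn, tn = 48, 6, 5, 486
se, sp = sens_spec(tp, fp, fn, tn)
print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")
```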

  8. Costs of trastuzumab in combination with chemotherapy for HER2-positive advanced gastric or gastroesophageal junction cancer: an economic evaluation in the Chinese context.

    PubMed

    Wu, Bin; Ye, Ming; Chen, Huafeng; Shen, Jinfang F

    2012-02-01

    Adding trastuzumab to a conventional regimen of chemotherapy can improve survival in patients with human epidermal growth factor receptor 2 (HER2)-positive advanced gastric or gastroesophageal junction (GEJ) cancer, but the economic impact of this practice is unknown. The purpose of this cost-effectiveness analysis was to estimate the effects of adding trastuzumab to standard chemotherapy in patients with HER2-positive advanced gastric or GEJ cancer on health and economic outcomes in China. A Markov model was developed to simulate the clinical course of typical patients with HER2-positive advanced gastric or GEJ cancer. Five-year quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs) were estimated. Model inputs were derived from the published literature and government sources. Direct costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were conducted. On baseline analysis, the addition of trastuzumab increased cost and QALY by $56,004.30 (year-2010 US $) and 0.18, respectively, relative to conventional chemotherapy, resulting in an ICER of $251,667.10/QALY gained. Probabilistic sensitivity analyses supported that the addition of trastuzumab was not cost-effective. Budgetary impact analysis estimated that the annual increase in fiscal expenditures would be ~$1 billion. On univariate sensitivity analysis, the median overall survival time for conventional chemotherapy was the most influential factor with respect to the robustness of the model. The findings from the present analysis suggest that the addition of trastuzumab to conventional chemotherapy might not be cost-effective in patients with HER2-positive advanced gastric or GEJ cancer. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.
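
    The incremental cost-effectiveness ratio reported above is the ratio of incremental cost to incremental QALYs. A minimal sketch of that calculation, with purely illustrative strategy costs, effectiveness values, and willingness-to-pay threshold rather than the study's figures, follows.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical strategy comparison (costs in US$, effectiveness in QALYs).
value = icer(cost_new=92_000, cost_old=36_000, qaly_new=1.08, qaly_old=0.90)
willingness_to_pay = 30_000  # illustrative threshold, $/QALY
verdict = "cost-effective" if value <= willingness_to_pay else "not cost-effective"
print(f"ICER = ${value:,.0f}/QALY -> {verdict} at ${willingness_to_pay:,}/QALY")
```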

  9. Saugus River and Tributaries Flood Damage Reduction Study: Lynn, Malden, Revere and Saugus, Massachusetts. Section 1. Feasibility Report.

    DTIC Science & Technology

    1989-12-01

    Tables include: Table 5, Sensitivity Analysis - Point of Pines LPP; Table 6, Plan Comparison; Table 7, NED Plan Project Costs; Table 8, Estimated Operation...Costs; Table 13, Selected Plan/Estimated Annual Benefits; Table 14, Comparative Impacts - NED Regional Floodgate Plan; Table 15, Economic Analysis. Includes detailed descriptions, plans and profiles and design considerations of the selected plan; coastal analysis of the shorefront; detailed project

  10. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model

    PubMed Central

    2010-01-01

    Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705

  11. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model.

    PubMed

    Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M

    2010-04-30

    Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.

  12. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help us understand the effect each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-Plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted some hours. Sensitivity analysis and uncertainty estimation results help us identify the measurements that volcanologists should perform during a volcanic crisis to reduce model uncertainty.

  13. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
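
    A bare-bones sketch of the redistribution idea described above follows: counts of patients with missing outcomes are allocated to the favorable and unfavorable categories under a range of assumptions, and the adjusted favorable-outcome proportions are recomputed for each arm. The counts and the grid of allocation fractions are illustrative; the closed-form covariance estimates and the stratification or covariate adjustment described in the paper are not reproduced.

```python
# Sensitivity analysis for a dichotomous endpoint with missing data (illustrative counts).
arms = {"treatment": {"favorable": 68, "unfavorable": 22, "missing": 10},
        "control":   {"favorable": 55, "unfavorable": 33, "missing": 12}}

# phi = assumed fraction of the missing patients who would have had a favorable outcome.
for phi_trt, phi_ctl in [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0), (0.0, 1.0)]:
    rates = {}
    for arm, phi in (("treatment", phi_trt), ("control", phi_ctl)):
        c = arms[arm]
        n = c["favorable"] + c["unfavorable"] + c["missing"]
        rates[arm] = (c["favorable"] + phi * c["missing"]) / n
    diff = rates["treatment"] - rates["control"]
    print(f"phi_trt={phi_trt:.1f}, phi_ctl={phi_ctl:.1f}: difference in favorable rate = {diff:+.3f}")
```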

  14. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  15. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  16. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    NASA Astrophysics Data System (ADS)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
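
    The sketch below illustrates the core likelihood used in this setting with a simple grid approximation to the posterior, rather than the Gibbs sampler of the paper: a pool of k fish tests positive with probability Se*(1-(1-p)^k) + (1-Sp)*(1-p)^k, and a flat prior on the prevalence p is updated with binomial pooled-test data. Pool size, counts, and assay characteristics are invented.

```python
import numpy as np
from scipy.stats import binom

k = 5                     # fish per pooled sample
n_pools, n_positive = 60, 9
se, sp = 0.95, 0.98       # assumed assay sensitivity and specificity

p_grid = np.linspace(0.0, 0.2, 2001)                                    # candidate prevalences
theta = se * (1 - (1 - p_grid) ** k) + (1 - sp) * (1 - p_grid) ** k     # P(pool tests positive)
posterior = binom.pmf(n_positive, n_pools, theta)                       # flat prior -> posterior proportional to likelihood
posterior /= np.trapz(posterior, p_grid)                                # normalize on the grid

mean_p = np.trapz(p_grid * posterior, p_grid)
cdf = np.cumsum(posterior) * (p_grid[1] - p_grid[0])
lo, hi = p_grid[np.searchsorted(cdf, 0.025)], p_grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean prevalence = {mean_p:.3f}, 95% credible interval ~ ({lo:.3f}, {hi:.3f})")
```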

  17. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  18. Weight Loss and Coronary Heart Disease: Sensitivity Analysis for Unmeasured Confounding by Undiagnosed Disease.

    PubMed

    Danaei, Goodarz; Robins, James M; Young, Jessica G; Hu, Frank B; Manson, JoAnn E; Hernán, Miguel A

    2016-03-01

    Evidence for the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded due to undiagnosed diseases that may affect weight loss. We used data from the Nurses' Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss strategies. We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Several sensitivity analyses were conducted. The estimated 26-year risk of CHD did not change under weight loss strategies using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no weight loss ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss with and without restricting the weight loss strategy to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. We estimated that maintaining or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-age US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations but these did not prevent us from estimating a beneficial effect of weight loss on diabetes.

  19. Estimating causal contrasts involving intermediate variables in the presence of selection bias.

    PubMed

    Valeri, Linda; Coull, Brent A

    2016-11-20

    An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis are either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk difference and risk-ratio scales irrespective of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Quantitative analysis of vascular parameters for micro-CT imaging of vascular networks with multi-resolution.

    PubMed

    Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie

    2016-03-01

    Previous studies showed that vascular parameters, both morphological and topological, were affected by changes in imaging resolution. However, neither the sensitivity analysis of vascular parameters at multiple resolutions nor the distinguishability of vascular parameters between different data groups has been discussed. In this paper, we proposed a quantitative analysis method of vascular parameters for vascular networks at multiple resolutions, by analyzing the sensitivity of vascular parameters at multiple resolutions and estimating the distinguishability of vascular parameters between different data groups. Combining the sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish vascular networks from different groups and were consistent with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution degradation but could also better distinguish vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.

  1. Effect of different transport observations on inverse modeling results: case study of a long-term groundwater tracer test monitored at high resolution

    NASA Astrophysics Data System (ADS)

    Rasa, Ehsan; Foglia, Laura; Mackay, Douglas M.; Scow, Kate M.

    2013-11-01

    Conservative tracer experiments can provide information useful for characterizing various subsurface transport properties. This study examines the effectiveness of three different types of transport observations for sensitivity analysis and parameter estimation of a three-dimensional site-specific groundwater flow and transport model: conservative tracer breakthrough curves (BTCs), first temporal moments of BTCs (m1), and tracer cumulative mass discharge (Md) through control planes combined with hydraulic head observations (h). High-resolution data obtained from a 410-day controlled field experiment at Vandenberg Air Force Base, California (USA), have been used. In this experiment, bromide was injected to create two adjacent plumes monitored at six different transects (perpendicular to groundwater flow) with a total of 162 monitoring wells. A total of 133 different observations of transient hydraulic head, 1,158 of BTC concentration, 23 of first moment, and 36 of mass discharge were used for sensitivity analysis and parameter estimation of nine flow and transport parameters. The importance of each group of transport observations in estimating these parameters was evaluated using sensitivity analysis, and five out of nine parameters were calibrated against these data. Results showed the advantages of using temporal moment of conservative tracer BTCs and mass discharge as observations for inverse modeling.
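
    For readers unfamiliar with the derived observation types used here, the short sketch below (Python, with a synthetic breakthrough curve and assumed discharge and control-plane geometry, not the Vandenberg data) shows how a BTC is reduced to a normalized first temporal moment and a mass discharge series.

      import numpy as np

      # Synthetic breakthrough curve at one well: time (days) and bromide concentration (mg/L).
      t = np.linspace(0.0, 410.0, 200)
      c = 30.0 * np.exp(-0.5 * ((t - 120.0) / 35.0) ** 2)

      m0 = np.trapz(c, t)                    # zeroth temporal moment (area under the BTC)
      m1 = np.trapz(t * c, t) / m0           # normalized first temporal moment (mean arrival time)

      # Mass discharge through a control plane: concentration times water flux over the plane.
      q = 0.05                               # assumed specific discharge, m/day
      A = 12.0                               # assumed control-plane area, m^2
      Md = q * A * c                         # g/day, since mg/L = g/m^3

      print(f"mean arrival time m1 = {m1:.1f} days, peak mass discharge = {Md.max():.1f} g/day")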

  2. Observation-based Estimate of Climate Sensitivity with a Scaling Climate Response Function

    NASA Astrophysics Data System (ADS)

    Hébert, Raphael; Lovejoy, Shaun

    2016-04-01

    To properly address the anthropogenic impacts upon the earth system, an estimate of the climate sensitivity to radiative forcing is essential. Observation-based estimates of climate sensitivity are often limited by their ability to take into account the slower response of the climate system imparted mainly by the large thermal inertia of oceans; they are nevertheless essential to provide an alternative to estimates from global circulation models and to increase our confidence in estimates of climate sensitivity through the multiplicity of approaches. It is straightforward to calculate the Effective Climate Sensitivity (EffCS) as the ratio of temperature change to the change in radiative forcing; the result is almost identical to the Transient Climate Response (TCR), but it underestimates the Equilibrium Climate Sensitivity (ECS). A study of global mean temperature is thus presented assuming a Scaling Climate Response Function to deterministic radiative forcing. This general form is justified because a scaling symmetry is respected by the dynamics and boundary conditions over a wide range of scales, and it allows for long-range dependencies while retaining only three parameters, which are estimated empirically. The range of memory is modulated by the scaling exponent H. We can calculate, analytically, a one-to-one relation between the scaling exponent H and the ratios of EffCS to TCR and of EffCS to ECS. The scaling exponent of the power law is estimated by a regression of temperature as a function of forcing. We consider for the analysis four different datasets of historical global mean temperature and 100 scenario runs of the Coupled Model Intercomparison Project Phase 5 distributed among the four Representative Concentration Pathway (RCP) scenarios. We find that the error function for the estimate on historical temperature is very wide and thus many scaling exponents can be used without meaningful changes in the fit residuals of historical temperatures; their responses in the year 2100, on the other hand, are very broad, especially for a low-emission scenario such as RCP 2.6. CMIP5 scenario runs thus allow for a narrower estimate of H, which can then be used to estimate the ECS and TCR from the EffCS estimated from the historical data.
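
    The estimation idea can be sketched as follows (Python, entirely synthetic forcing and temperature series, a pure power-law kernel and a grid search over H; this is an illustration of the regression step only, not the authors' procedure or data).

      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1880, 2021)
      F = 0.02 * (years - years[0]) + 0.3 * rng.standard_normal(len(years))   # synthetic forcing, W/m^2

      def scaling_response(F, H, amp, dt=1.0):
          """Convolve forcing with a power-law response kernel G(t) ~ t^(H-1)."""
          t = np.arange(1, len(F) + 1) * dt
          return amp * np.convolve(F, t ** (H - 1.0))[: len(F)] * dt

      T_obs = scaling_response(F, H=-0.4, amp=0.15) + 0.1 * rng.standard_normal(len(F))

      # Grid search over H; for each H the amplitude is the least-squares regression
      # coefficient of temperature on the convolved forcing.
      best = None
      for H in np.linspace(-0.9, -0.1, 81):
          x = scaling_response(F, H, 1.0)
          amp = (x @ T_obs) / (x @ x)
          rss = np.sum((T_obs - amp * x) ** 2)
          if best is None or rss < best[0]:
              best = (rss, H, amp)

      print(f"estimated H = {best[1]:.2f} (true value used to generate the data: -0.40)")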

  3. Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.

    PubMed

    Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J

    2011-05-01

    To compare the relative cost effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP). A Markov decision analysis model was used to assess and compare the relative cost effectiveness of expectant management, use of a pessary, and surgery for obtaining months of quality-adjusted life over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients that eventually transitioned to surgery) achieved 10.4 quality-adjusted months, at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months, at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications, so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
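
    A minimal Markov cohort sketch of the kind of model described here is given below (Python); the states, monthly transition probabilities, utilities and costs are hypothetical placeholders, not the published model's inputs.

      import numpy as np

      # States: 0 = prolapse managed with a pessary, 1 = post-surgery (VRS), 2 = recurrent prolapse.
      P = np.array([
          [0.93, 0.05, 0.02],   # pessary: remain, transition to surgery, treatment failure
          [0.00, 0.97, 0.03],   # post-surgery: remain, recurrence
          [0.00, 0.40, 0.60],   # recurrence: reoperation, persist
      ])
      utility = np.array([0.85, 0.92, 0.60])     # quality weight per state (hypothetical)
      cost = np.array([150.0, 300.0, 500.0])     # monthly cost per state, $ (hypothetical)
      surgery_cost = 9000.0                      # one-off cost on entering the surgery state

      state = np.array([1.0, 0.0, 0.0])          # whole cohort starts with a pessary
      qalm = total_cost = 0.0
      for month in range(12):                    # 1-year horizon, as in the record
          qalm += state @ utility                # quality-adjusted life-months accrued this cycle
          inflow_surgery = state[0] * P[0, 1] + state[2] * P[2, 1]
          total_cost += state @ cost + surgery_cost * inflow_surgery
          state = state @ P

      print(f"quality-adjusted months: {qalm:.1f}, expected cost: ${total_cost:,.0f}")

    Comparing such totals across strategies, and re-running the model while varying the utilities, probabilities and costs, is what the sensitivity analysis in the record refers to.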

  4. Uncertainty and sensitivity analysis of the basic reproduction number of diphtheria: a case study of a Rohingya refugee camp in Bangladesh, November–December 2017

    PubMed Central

    Matsuyama, Ryota; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya

    2018-01-01

    Background: A Rohingya refugee camp in Cox’s Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The background information of previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. Methods: A renewal process model was devised to estimate the R0 and ascertainment rate of cases, and loss of susceptible individuals was modeled as one minus the sum of initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty of initially immune fraction, we employed a Latin Hypercube sampling (LHS) method. Results: R0 ranged from 4.7 to 14.8 with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Discussion: Estimated R0 was broadly consistent with published estimate from endemic data, indicating that the vaccination coverage of 86% has to be satisfied to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified. PMID:29629244

  5. Uncertainty and sensitivity analysis of the basic reproduction number of diphtheria: a case study of a Rohingya refugee camp in Bangladesh, November-December 2017.

    PubMed

    Matsuyama, Ryota; Akhmetzhanov, Andrei R; Endo, Akira; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya; Nishiura, Hiroshi

    2018-01-01

    A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The background information of previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. A renewal process model was devised to estimate the R0 and ascertainment rate of cases, and loss of susceptible individuals was modeled as one minus the sum of initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty of initially immune fraction, we employed a Latin Hypercube sampling (LHS) method. R0 ranged from 4.7 to 14.8 with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Estimated R0 was broadly consistent with published estimate from endemic data, indicating that the vaccination coverage of 86% has to be satisfied to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified.
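
    The Latin Hypercube step used in both versions of this record can be sketched as follows (Python); the parameter ranges and the R0 expression are illustrative stand-ins (a Wallinga-Lipsitch gamma generation-time formula inflated by the immune fraction), not the authors' renewal-process model.

      import numpy as np

      rng = np.random.default_rng(42)

      def latin_hypercube(n, bounds):
          """n stratified samples per parameter; bounds is a list of (low, high) pairs."""
          d = len(bounds)
          u = (rng.uniform(size=(n, d)) + np.arange(n)[:, None]) / n   # one draw per stratum
          for j in range(d):
              rng.shuffle(u[:, j])                                     # decouple the columns
          lo, hi = np.array(bounds).T
          return lo + u * (hi - lo)

      # Parameters: initially immune fraction, coefficient of variation of the generation time.
      samples = latin_hypercube(1000, [(0.2, 0.6), (0.3, 0.9)])

      def r0_proxy(immune_frac, gt_cv, growth_rate=0.25, gt_mean=7.0):
          k = 1.0 / gt_cv ** 2                            # gamma shape from the CV
          R_eff = (1.0 + growth_rate * gt_mean / k) ** k  # Wallinga-Lipsitch relation
          return R_eff / (1.0 - immune_frac)              # R0 in a fully susceptible population

      r0 = np.array([r0_proxy(*s) for s in samples])
      print(f"R0 median {np.median(r0):.1f}, 95% interval {np.quantile(r0, [0.025, 0.975]).round(1)}")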

  6. Obesity, diabetes, and associated costs of exposure to endocrine-disrupting chemicals in the European Union.

    PubMed

    Legler, Juliette; Fletcher, Tony; Govarts, Eva; Porta, Miquel; Blumberg, Bruce; Heindel, Jerrold J; Trasande, Leonardo

    2015-04-01

    Obesity and diabetes are epidemic in the European Union (EU). Exposure to endocrine-disrupting chemicals (EDCs) is increasingly recognized as a contributor, independent of diet and physical activity. The objective was to estimate obesity, diabetes, and associated costs that can be reasonably attributed to EDC exposures in the EU. An expert panel evaluated evidence for probability of causation using weight-of-evidence characterization adapted from that applied by the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and burden of disease. Cost estimation as of 2010 utilized published cost estimates for childhood obesity, adult obesity, and adult diabetes. Setting, Patients and Participants, and Intervention: Cost estimation was performed from the societal perspective. The panel identified a 40% to 69% probability of dichlorodiphenyldichloroethylene causing 1555 cases of overweight at age 10 (sensitivity analysis: 1555-5463) in 2010 with associated costs of €24.6 million (sensitivity analysis: €24.6-86.4 million). A 20% to 39% probability was identified for dichlorodiphenyldichloroethylene causing 28 200 cases of adult diabetes (sensitivity analysis: 28 200-56 400) with associated costs of €835 million (sensitivity analysis: €835 million-16.6 billion). The panel also identified a 40% to 69% probability of phthalate exposure causing 53 900 cases of obesity in older women and €15.6 billion in associated costs. Phthalate exposure was also found to have a 40% to 69% probability of causing 20 500 new-onset cases of diabetes in older women with €607 million in associated costs. Prenatal bisphenol A exposure was identified to have a 20% to 69% probability of causing 42 400 cases of childhood obesity, with associated lifetime costs of €1.54 billion. EDC exposures in the EU contribute substantially to obesity and diabetes, with a moderate probability of >€18 billion costs per year. This is a conservative estimate; the results emphasize the need to control EDC exposures.

  7. Cost effectiveness analysis of immunotherapy in patients with grass pollen allergic rhinoconjunctivitis in Germany.

    PubMed

    Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M

    2012-01-01

    An economic evaluation was conducted to assess the outcomes and costs as well as cost-effectiveness of the following grass-pollen immunotherapies: OA (Oralair; Stallergenes S.A., Antony, France) vs GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló) (immunotherapy agents alongside symptomatic medication) and symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ and ALD, and a symptomatic treatment. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.

  8. Obesity, Diabetes, and Associated Costs of Exposure to Endocrine-Disrupting Chemicals in the European Union

    PubMed Central

    Legler, Juliette; Fletcher, Tony; Govarts, Eva; Porta, Miquel; Blumberg, Bruce; Heindel, Jerrold J.

    2015-01-01

    Context: Obesity and diabetes are epidemic in the European Union (EU). Exposure to endocrine-disrupting chemicals (EDCs) is increasingly recognized as a contributor, independent of diet and physical activity. Objective: The objective was to estimate obesity, diabetes, and associated costs that can be reasonably attributed to EDC exposures in the EU. Design: An expert panel evaluated evidence for probability of causation using weight-of-evidence characterization adapted from that applied by the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and burden of disease. Cost estimation as of 2010 utilized published cost estimates for childhood obesity, adult obesity, and adult diabetes. Setting, Patients and Participants, and Intervention: Cost estimation was performed from the societal perspective. Results: The panel identified a 40% to 69% probability of dichlorodiphenyldichloroethylene causing 1555 cases of overweight at age 10 (sensitivity analysis: 1555–5463) in 2010 with associated costs of €24.6 million (sensitivity analysis: €24.6–86.4 million). A 20% to 39% probability was identified for dichlorodiphenyldichloroethylene causing 28 200 cases of adult diabetes (sensitivity analysis: 28 200–56 400) with associated costs of €835 million (sensitivity analysis: €835 million–16.6 billion). The panel also identified a 40% to 69% probability of phthalate exposure causing 53 900 cases of obesity in older women and €15.6 billion in associated costs. Phthalate exposure was also found to have a 40% to 69% probability of causing 20 500 new-onset cases of diabetes in older women with €607 million in associated costs. Prenatal bisphenol A exposure was identified to have a 20% to 69% probability of causing 42 400 cases of childhood obesity, with associated lifetime costs of €1.54 billion. Conclusions: EDC exposures in the EU contribute substantially to obesity and diabetes, with a moderate probability of >€18 billion costs per year. This is a conservative estimate; the results emphasize the need to control EDC exposures. PMID:25742518

  9. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.
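
    The type of sensitivity reported here is commonly computed as a normalized (relative) coefficient by finite differences; a generic sketch follows (Python), with a toy strength function standing in for the musculoskeletal model and hypothetical nominal parameter values.

      import numpy as np

      def endpoint_strength(params):
          """Toy surrogate for a model's hand-strength output (N).
          params = [peak isometric force, optimal fibre length, pennation, tendon slack length]."""
          f0, lopt, penn, lts = params
          return f0 * np.cos(penn) * np.exp(-((lts / lopt - 2.0) ** 2))

      def normalized_sensitivity(f, p0, i, rel_step=0.01):
          """(dF/F) / (dp/p) for parameter i, by central differences."""
          p_hi, p_lo = p0.copy(), p0.copy()
          p_hi[i] *= 1.0 + rel_step
          p_lo[i] *= 1.0 - rel_step
          return (f(p_hi) - f(p_lo)) / f(p0) / (2.0 * rel_step)

      p_nominal = np.array([1200.0, 0.10, 0.12, 0.22])   # hypothetical nominal values
      for i, name in enumerate(["peak force", "optimal length", "pennation", "tendon slack length"]):
          print(f"{name:20s} S = {normalized_sensitivity(endpoint_strength, p_nominal, i):+.2f}")

    Repeating the calculation after scaling the nominal parameters to mimic a weakened limb shows how the coefficients themselves change, which is the comparison the record describes.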

  10. Ultrasonography for endoleak detection after endoluminal abdominal aortic aneurysm repair.

    PubMed

    Abraha, Iosief; Luchetta, Maria Laura; De Florio, Rita; Cozzolino, Francesco; Casazza, Giovanni; Duca, Piergiorgio; Parente, Basso; Orso, Massimiliano; Germani, Antonella; Eusebi, Paolo; Montedori, Alessandro

    2017-06-09

    People with abdominal aortic aneurysm who receive endovascular aneurysm repair (EVAR) need lifetime surveillance to detect potential endoleaks. Endoleak is defined as persistent blood flow within the aneurysm sac following EVAR. Computed tomography (CT) angiography is considered the reference standard for endoleak surveillance. Colour duplex ultrasound (CDUS) and contrast-enhanced CDUS (CE-CDUS) are less invasive but considered less accurate than CT. To determine the diagnostic accuracy of colour duplex ultrasound (CDUS) and contrast-enhanced-colour duplex ultrasound (CE-CDUS) in terms of sensitivity and specificity for endoleak detection after endoluminal abdominal aortic aneurysm repair (EVAR). We searched MEDLINE, Embase, LILACS, ISI Conference Proceedings, Zetoc, and trial registries in June 2016 without language restrictions and without use of filters to maximize sensitivity. Any cross-sectional diagnostic study evaluating participants who received EVAR by both ultrasound (with or without contrast) and CT scan assessed at regular intervals. Two pairs of review authors independently extracted data and assessed quality of included studies using the QUADAS 1 tool. A third review author resolved discrepancies. The unit of analysis was number of participants for the primary analysis and number of scans performed for the secondary analysis. We carried out a meta-analysis to estimate sensitivity and specificity of CDUS or CE-CDUS using a bivariate model. We analysed each index test separately. As potential sources of heterogeneity, we explored year of publication, characteristics of included participants (age and gender), direction of the study (retrospective, prospective), country of origin, number of CDUS operators, and ultrasound manufacturer. We identified 42 primary studies with 4220 participants. Twenty studies provided accuracy data based on the number of individual participants (seven of which provided data with and without the use of contrast). Sixteen of these studies evaluated the accuracy of CDUS. These studies were generally of moderate to low quality: only three studies fulfilled all the QUADAS items; in six (40%) of the studies, the delay between the tests was unclear or longer than four weeks; in eight (50%), the blinding of either the index test or the reference standard was not clearly reported or was not performed; and in two studies (12%), the interpretation of the reference standard was not clearly reported. Eleven studies evaluated the accuracy of CE-CDUS. These studies were of better quality than the CDUS studies: five (45%) studies fulfilled all the QUADAS items; four (36%) did not report clearly the blinding interpretation of the reference standard; and two (18%) did not clearly report the delay between the two tests.Based on the bivariate model, the summary estimates for CDUS were 0.82 (95% confidence interval (CI) 0.66 to 0.91) for sensitivity and 0.93 (95% CI 0.87 to 0.96) for specificity whereas for CE-CDUS the estimates were 0.94 (95% CI 0.85 to 0.98) for sensitivity and 0.95 (95% CI 0.90 to 0.98) for specificity. Regression analysis showed that CE-CDUS was superior to CDUS in terms of sensitivity (LR Chi 2 = 5.08, 1 degree of freedom (df); P = 0.0242 for model improvement).Seven studies provided estimates before and after administration of contrast. Sensitivity before contrast was 0.67 (95% CI 0.47 to 0.83) and after contrast was 0.97 (95% CI 0.92 to 0.99). 
The improvement in sensitivity with contrast use was statistically significant (LR Chi2 = 13.47, 1 df; P = 0.0002 for model improvement). Regression testing showed evidence of statistically significant effect bias related to year of publication and study quality within the individual-participant-based CDUS studies. Sensitivity estimates were higher in the studies published before 2006 than the estimates obtained from studies published in 2006 or later (P < 0.001), and studies judged as low/unclear quality provided higher sensitivity estimates. When regression testing was applied to the individual-participant-based CE-CDUS studies, none of the items, namely direction of the study design, quality, and age, were identified as a source of heterogeneity. Twenty-two studies provided accuracy data based on the number of scans performed (of which four provided data with and without the use of contrast). Analysis of the studies that provided scan-based data showed similar results. Summary estimates for CDUS (18 studies) showed 0.72 (95% CI 0.55 to 0.85) for sensitivity and 0.95 (95% CI 0.90 to 0.96) for specificity, whereas summary estimates for CE-CDUS (eight studies) were 0.91 (95% CI 0.68 to 0.98) for sensitivity and 0.89 (95% CI 0.71 to 0.96) for specificity. This review demonstrates that both ultrasound modalities (with or without contrast) showed high specificity. For ruling in endoleaks, CE-CDUS appears superior to CDUS. In an endoleak surveillance programme CE-CDUS can be introduced as a routine diagnostic modality, followed by CT scan only when the ultrasound is positive, to establish the type of endoleak and the subsequent therapeutic management.

  11. Interpreting ambiguous 'trace' results in Schistosoma mansoni CCA Tests: Estimating sensitivity and specificity of ambiguous results with no gold standard.

    PubMed

    Clements, Michelle N; Donnelly, Christl A; Fenwick, Alan; Kabatereine, Narcis B; Knowles, Sarah C L; Meité, Aboulaye; N'Goran, Eliézer K; Nalule, Yolisa; Nogaro, Sarah; Phillips, Anna E; Tukahebwa, Edridah Muheki; Fleming, Fiona M

    2017-12-01

    The development of new diagnostics is an important tool in the fight against disease. Latent Class Analysis (LCA) is used to estimate the sensitivity and specificity of tests in the absence of a gold standard. The main field diagnostic for Schistosoma mansoni infection, Kato-Katz (KK), is not very sensitive at low infection intensities. A point-of-care circulating cathodic antigen (CCA) test has been shown to be more sensitive than KK. However, CCA can return an ambiguous 'trace' result between 'positive' and 'negative', and much debate has focused on interpretation of trace results. We show how LCA can be extended to include ambiguous trace results and analyse S. mansoni studies from both Côte d'Ivoire (CdI) and Uganda. We compare the diagnostic performance of KK and CCA and the observed results by each test to the estimated infection prevalence in the population. Prevalence by KK was higher in CdI (13.4%) than in Uganda (6.1%), but prevalence by CCA was similar between countries, both when trace was assumed to be negative (CCAtn: 11.7% in CdI and 9.7% in Uganda) and positive (CCAtp: 20.1% in CdI and 22.5% in Uganda). The estimated sensitivity of CCA was more consistent between countries than the estimated sensitivity of KK, and estimated infection prevalence did not significantly differ between CdI (20.5%) and Uganda (19.1%). The prevalence by CCA with trace as positive did not differ significantly from estimates of infection prevalence in either country, whereas both KK and CCA with trace as negative significantly underestimated infection prevalence in both countries. Incorporation of ambiguous results into an LCA enables the effect of different treatment thresholds to be directly assessed and is applicable in many fields. Our results showed that CCA with trace as positive most accurately estimated infection prevalence.
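
    A quick way to relate the apparent prevalences quoted above to a true prevalence, given an assumed sensitivity and specificity, is the Rogan-Gladen correction; the sketch below (Python) uses the Côte d'Ivoire figures from the record but purely illustrative Se/Sp values, and is a back-of-envelope companion to, not a substitute for, the latent class analysis.

      def rogan_gladen(p_apparent, se, sp):
          """True prevalence implied by an apparent prevalence under an imperfect test."""
          return (p_apparent + sp - 1.0) / (se + sp - 1.0)

      # Apparent CCA prevalences in Cote d'Ivoire from the record.
      readings = [
          ("CCA, trace as negative", 0.117, 0.65, 0.98),   # assumed lower sensitivity
          ("CCA, trace as positive", 0.201, 0.95, 0.92),   # assumed higher sensitivity
      ]
      for label, p_app, se, sp in readings:
          print(f"{label}: apparent {p_app:.1%} -> corrected {rogan_gladen(p_app, se, sp):.1%}")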

  12. Interpreting ambiguous ‘trace’ results in Schistosoma mansoni CCA Tests: Estimating sensitivity and specificity of ambiguous results with no gold standard

    PubMed Central

    Donnelly, Christl A.; Fenwick, Alan; Kabatereine, Narcis B.; Knowles, Sarah C. L.; Meité, Aboulaye; N'Goran, Eliézer K.; Nalule, Yolisa; Nogaro, Sarah; Phillips, Anna E.; Tukahebwa, Edridah Muheki; Fleming, Fiona M.

    2017-01-01

    Background: The development of new diagnostics is an important tool in the fight against disease. Latent Class Analysis (LCA) is used to estimate the sensitivity and specificity of tests in the absence of a gold standard. The main field diagnostic for Schistosoma mansoni infection, Kato-Katz (KK), is not very sensitive at low infection intensities. A point-of-care circulating cathodic antigen (CCA) test has been shown to be more sensitive than KK. However, CCA can return an ambiguous ‘trace’ result between ‘positive’ and ‘negative’, and much debate has focused on interpretation of trace results. Methodology/Principal findings: We show how LCA can be extended to include ambiguous trace results and analyse S. mansoni studies from both Côte d’Ivoire (CdI) and Uganda. We compare the diagnostic performance of KK and CCA and the observed results by each test to the estimated infection prevalence in the population. Prevalence by KK was higher in CdI (13.4%) than in Uganda (6.1%), but prevalence by CCA was similar between countries, both when trace was assumed to be negative (CCAtn: 11.7% in CdI and 9.7% in Uganda) and positive (CCAtp: 20.1% in CdI and 22.5% in Uganda). The estimated sensitivity of CCA was more consistent between countries than the estimated sensitivity of KK, and estimated infection prevalence did not significantly differ between CdI (20.5%) and Uganda (19.1%). The prevalence by CCA with trace as positive did not differ significantly from estimates of infection prevalence in either country, whereas both KK and CCA with trace as negative significantly underestimated infection prevalence in both countries. Conclusions: Incorporation of ambiguous results into an LCA enables the effect of different treatment thresholds to be directly assessed and is applicable in many fields. Our results showed that CCA with trace as positive most accurately estimated infection prevalence. PMID:29220354

  13. Modeling whole-tree carbon assimilation rate using observed transpiration rates and needle sugar carbon isotope ratios.

    PubMed

    Hu, Jia; Moore, David J P; Riveros-Iregui, Diego A; Burns, Sean P; Monson, Russell K

    2010-03-01

    Understanding controls over plant-atmosphere CO2 exchange is important for quantifying carbon budgets across a range of spatial and temporal scales. In this study, we used a simple approach to estimate whole-tree CO2 assimilation rate (ATree) in a subalpine forest ecosystem. We analysed the carbon isotope ratio (δ13C) of extracted needle sugars and combined it with the daytime leaf-to-air vapor pressure deficit to estimate tree water-use efficiency (WUE). The estimated WUE was then combined with observations of tree transpiration rate (E) from sap flow techniques to estimate ATree. Estimates of ATree for the three dominant tree species in the forest were combined with species distribution and tree size to estimate gross primary productivity (GPP) using an ecosystem process model. A sensitivity analysis showed that estimates of ATree were more sensitive to dynamics in E than in δ13C. At the ecosystem scale, the abundance of lodgepole pine trees influenced seasonal dynamics in GPP considerably more than Engelmann spruce and subalpine fir because of the greater sensitivity of its E to seasonal climate variation. The results provide the framework for a nondestructive method for estimating whole-tree carbon assimilation rate and ecosystem GPP over daily to weekly time scales.
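
    The estimation chain described here (needle-sugar δ13C → ci/ca via the simple Farquhar discrimination model → WUE with daytime VPD → ATree = WUE × E) can be sketched as below (Python); the fractionation constants are the standard textbook values and every input number is a hypothetical example, not data from the study.

      # Simple (linear) Farquhar discrimination model; all inputs are illustrative.
      a, b = 4.4, 27.0          # diffusion and carboxylation fractionation, per mil
      delta_air = -8.5          # assumed delta13C of atmospheric CO2, per mil
      ca = 390.0                # ambient CO2 mole fraction, umol/mol
      pressure = 70.0           # assumed air pressure at a subalpine site, kPa

      delta_sugar = -25.0       # measured needle-sugar delta13C, per mil (example)
      vpd = 1.2                 # daytime leaf-to-air vapour pressure deficit, kPa (example)
      E = 0.02                  # tree transpiration from sap flow, mol H2O/s (example)

      Delta = (delta_air - delta_sugar) / (1.0 + delta_sugar / 1000.0)   # photosynthetic discrimination
      ci_ca = (Delta - a) / (b - a)

      # Water-use efficiency A/E (umol CO2 per mol H2O), with VPD expressed as a mole fraction.
      wue = ca * (1.0 - ci_ca) * pressure / (1.6 * vpd)

      A_tree = wue * E          # whole-tree assimilation, umol CO2/s
      print(f"ci/ca = {ci_ca:.2f}, WUE = {wue:.0f} umol CO2 / mol H2O, A_tree = {A_tree:.0f} umol CO2/s")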

  14. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  15. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    NASA Astrophysics Data System (ADS)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as a case study four storm events observed in the Portobello catchment (53 km2; Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.

  16. Development of a new semi-analytical model for cross-borehole flow experiments in fractured media

    USGS Publications Warehouse

    Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.

    2015-01-01

    Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.

  17. Regional surface soil heat flux estimate from multiple remote sensing data in a temperate and semiarid basin

    NASA Astrophysics Data System (ADS)

    Li, Nana; Jia, Li; Lu, Jing; Menenti, Massimo; Zhou, Jie

    2017-01-01

    Estimation of the regional surface soil heat flux (G0) is very important for large-scale land surface process modeling. However, most regional G0 estimation methods are based on an empirical relationship between G0 and the net radiation flux. A physical model based on harmonic analysis (referred to as the "HM model") was improved and applied over the Heihe River Basin, northwest China, with multiple remote sensing data, e.g., FY-2C, AMSR-E, and MODIS, and soil map data. A sensitivity analysis of the model was also performed. The results show that the improved model describes the variation of G0 well. Land surface temperature (LST) and thermal inertia (Γ) are the two key input variables to the HM model. Compared with in situ G0, there are some differences, mainly due to the differences between remotely sensed LST and in situ LST. The sensitivity analysis shows that errors from -7 to -0.5 K in LST amplitude and from -300 to 300 J m^-2 K^-1 s^-0.5 in Γ cause about 20% error in G0, which is acceptable for G0 estimation.
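
    The classical harmonic method that the record's HM model builds on can be sketched as follows (Python): fit Fourier harmonics to the diurnal LST course, then reconstruct G0 from the thermal inertia. The LST series, the number of harmonics and the Γ value below are synthetic placeholders, and the paper's improvements and remote-sensing inputs are not reproduced.

      import numpy as np

      omega = 2.0 * np.pi / 86400.0        # diurnal angular frequency, rad/s
      gamma = 800.0                        # assumed thermal inertia, J m^-2 K^-1 s^-0.5
      t = np.linspace(0.0, 86400.0, 145)   # one day at 10-minute steps, s

      # Synthetic diurnal LST: mean plus two harmonics (illustrative amplitudes and phases).
      lst = 295.0 + 9.0 * np.sin(omega * t - 1.0) + 2.0 * np.sin(2.0 * omega * t + 0.5)

      # Least-squares fit of the harmonic coefficients.
      n_harm = 3
      X = np.column_stack(
          [np.ones_like(t)]
          + [f(n * omega * t) for n in range(1, n_harm + 1) for f in (np.sin, np.cos)]
      )
      coef, *_ = np.linalg.lstsq(X, lst, rcond=None)

      # G0(t) = Gamma * sum_n A_n * sqrt(n*omega) * sin(n*omega*t + phi_n + pi/4)
      G0 = np.zeros_like(t)
      for n in range(1, n_harm + 1):
          s, c = coef[2 * n - 1], coef[2 * n]
          A_n, phi_n = np.hypot(s, c), np.arctan2(c, s)
          G0 += gamma * A_n * np.sqrt(n * omega) * np.sin(n * omega * t + phi_n + np.pi / 4.0)

      print(f"peak surface soil heat flux ~ {G0.max():.0f} W/m^2")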

  18. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  19. Inaccurate Estimation of Disparities Due to Mischievous Responders: Several Suggestions to Assess Conclusions

    ERIC Educational Resources Information Center

    Robinson-Cimpian, Joseph P.

    2014-01-01

    This article introduces novel sensitivity-analysis procedures for investigating and reducing the bias that mischievous responders (i.e., youths who provide extreme, and potentially untruthful, responses to multiple questions) often introduce in adolescent disparity estimates based on data from self-administered questionnaires (SAQs). Mischievous…

  20. Current estimates of the cure fraction: a feasibility study of statistical cure for breast and colorectal cancer.

    PubMed

    Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B

    2014-11-01

    The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.
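
    The parametric mixture cure model referred to here has the survival form S(t) = c + (1 - c)·Su(t); a minimal maximum-likelihood sketch on synthetic right-censored data is given below (Python with scipy, Weibull uncured survival, hypothetical parameter values). Shortening the follow-up in the simulation illustrates why the estimates are design dependent.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(7)

      # Synthetic cohort: 35% cured; uncured survival ~ Weibull(shape 1.4, scale 4 years);
      # administrative censoring at 12 years of follow-up (all numbers illustrative).
      n, cure_true, follow_up = 2000, 0.35, 12.0
      t_event = rng.weibull(1.4, size=n) * 4.0
      t_event[rng.uniform(size=n) < cure_true] = np.inf        # cured subjects never fail
      time = np.minimum(t_event, follow_up)
      event = (t_event <= follow_up).astype(float)

      def negloglik(theta):
          c = expit(theta[0])                                  # cure fraction in (0, 1)
          scale, shape = np.exp(theta[1]), np.exp(theta[2])
          S_u = np.exp(-((time / scale) ** shape))             # uncured survival
          f_u = shape / scale * (time / scale) ** (shape - 1.0) * S_u
          ll = event * np.log((1.0 - c) * f_u) + (1.0 - event) * np.log(c + (1.0 - c) * S_u)
          return -np.sum(ll)

      fit = minimize(negloglik, x0=[0.0, np.log(3.0), 0.0], method="Nelder-Mead")
      print(f"estimated cure fraction {expit(fit.x[0]):.3f} (simulated truth {cure_true})")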

  1. The challenge of modelling nitrogen management at the field scale: simulation and sensitivity analysis of N2O fluxes across nine experimental sites using DailyDayCent

    NASA Astrophysics Data System (ADS)

    Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.

    2014-09-01

    The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty as they do not account for spatial variations in emissions. Therefore, biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that modelled N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial values. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.

  2. Sensitivity analysis of automatic flight control systems using singular value concepts

    NASA Technical Reports Server (NTRS)

    Herrera-Vaillard, A.; Paduano, J.; Downing, D.

    1985-01-01

    A sensitivity analysis is presented that can be used to judge the impact of vehicle dynamic model variations on the relative stability of multivariable continuous closed-loop control systems. The sensitivity analysis uses and extends the singular-value concept by developing expressions for the gradients of the singular value with respect to variations in the vehicle dynamic model and the controller design. Combined with a priori estimates of the accuracy of the model, the gradients are used to identify the elements in the vehicle dynamic model and controller that could severely impact the system's relative stability. The technique is demonstrated for a yaw/roll damper stability augmentation designed for a business jet.
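
    The key quantity in this kind of analysis is the gradient of a singular value with respect to a model parameter, dσ_i/dp = u_i^T (dA/dp) v_i (valid for distinct, nonzero singular values). The sketch below (Python) checks this identity against a finite difference on a hypothetical parameterized matrix; it is not the return-difference matrix of the cited yaw/roll damper design.

      import numpy as np

      rng = np.random.default_rng(3)
      A0, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

      def A(p):
          """Hypothetical parameterized system matrix: A(p) = A0 + p*B, so dA/dp = B."""
          return A0 + p * B

      p0, i = 0.7, 2                                   # nominal parameter, singular value index
      U, s, Vt = np.linalg.svd(A(p0))
      grad_analytic = U[:, i] @ B @ Vt[i, :]           # u_i^T (dA/dp) v_i

      eps = 1e-6                                       # finite-difference check
      s_hi = np.linalg.svd(A(p0 + eps), compute_uv=False)
      s_lo = np.linalg.svd(A(p0 - eps), compute_uv=False)
      print(f"analytic {grad_analytic:.6f} vs finite difference {(s_hi[i] - s_lo[i]) / (2 * eps):.6f}")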

  3. Modification and Validation of the Triglyceride-to-HDL Cholesterol Ratio as a Surrogate of Insulin Sensitivity in White Juveniles and Adults without Diabetes Mellitus: The Single Point Insulin Sensitivity Estimator (SPISE).

    PubMed

    Paulmichl, Katharina; Hatunic, Mensud; Højlund, Kurt; Jotic, Aleksandra; Krebs, Michael; Mitrakou, Asimina; Porcellati, Francesca; Tura, Andrea; Bergsten, Peter; Forslund, Anders; Manell, Hannes; Widhalm, Kurt; Weghuber, Daniel; Anderwald, Christian-Heinz

    2016-09-01

    The triglyceride-to-HDL cholesterol (TG/HDL-C) ratio was introduced as a tool to estimate insulin resistance, because circulating lipid measurements are available in routine settings. Insulin, C-peptide, and free fatty acids are components of other insulin-sensitivity indices but their measurement is expensive. Easier and more affordable tools are of interest for both pediatric and adult patients. Study participants from the Relationship Between Insulin Sensitivity and Cardiovascular Disease [43.9 (8.3) years, n = 1260] as well as the Beta-Cell Function in Juvenile Diabetes and Obesity study cohorts [15 (1.9) years, n = 29] underwent oral-glucose-tolerance tests and euglycemic clamp tests for estimation of whole-body insulin sensitivity and calculation of insulin sensitivity indices. To refine the TG/HDL ratio, mathematical modeling was applied including body mass index (BMI), fasting TG, and HDL cholesterol and compared to the clamp-derived M-value as an estimate of insulin sensitivity. Each modeling result was scored by identifying insulin resistance and correlation coefficient. The Single Point Insulin Sensitivity Estimator (SPISE) was compared to traditional insulin sensitivity indices using area under the ROC curve (aROC) analysis and χ² test. The novel formula for SPISE was computed as follows: SPISE = 600 × HDL-C^0.185 / (TG^0.2 × BMI^1.338), with fasting HDL-C (mg/dL), fasting TG concentrations (mg/dL), and BMI (kg/m^2). A cutoff value of 6.61 corresponds to an M-value smaller than 4.7 mg · kg^-1 · min^-1 (aROC, M:0.797). SPISE showed a significantly better aROC than the TG/HDL-C ratio. SPISE aROC was comparable to the Matsuda ISI (insulin sensitivity index) and equal to the QUICKI (quantitative insulin sensitivity check index) and HOMA-IR (homeostasis model assessment-insulin resistance) when calculated with M-values. The SPISE seems well suited to surrogate whole-body insulin sensitivity from inexpensive fasting single-point blood draw and BMI in white adolescents and adults. © 2016 American Association for Clinical Chemistry.
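
    The published formula can be transcribed directly; the snippet below (Python) applies it with illustrative input values and the 6.61 cutoff quoted in the record.

      def spise(hdl_mg_dl, tg_mg_dl, bmi):
          """Single Point Insulin Sensitivity Estimator (formula as given in the record)."""
          return 600.0 * hdl_mg_dl ** 0.185 / (tg_mg_dl ** 0.2 * bmi ** 1.338)

      # Illustrative subject: HDL-C 45 mg/dL, triglycerides 160 mg/dL, BMI 31 kg/m^2.
      value = spise(45.0, 160.0, 31.0)
      print(f"SPISE = {value:.2f} ({'below' if value < 6.61 else 'above'} the 6.61 cutoff)")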

  4. When does mass screening for open neural tube defects in low-risk pregnancies result in cost savings?

    PubMed Central

    Tosi, L L; Detsky, A S; Roye, D P; Morden, M L

    1987-01-01

    Using a decision analysis model, we estimated the savings that might be derived from a mass prenatal screening program aimed at detecting open neural tube defects (NTDs) in low-risk pregnancies. Our baseline analysis showed that screening v. no screening could be expected to save approximately $8 per pregnancy given a cost of $7.50 for the maternal serum alpha-feto-protein (MSAFP) test and a cost of $42,507 for hospital and rehabilitation services for the first 10 years of life for a child with spina bifida. When a more liberal estimate of the costs of caring for such a child was used, the savings with the screening program were more substantial. We performed extensive sensitivity analyses, which showed that the savings were somewhat sensitive to the cost of the MSAFP test and highly sensitive to the specificity (but not the sensitivity) of the test. A screening program for NTDs in low-risk pregnancies may result in substantial savings in direct health care costs if the screening protocol is followed rigorously and efficiently. PMID:2433011

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. However, there is little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
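
    The bootstrap-based convergence check can be illustrated with a toy example (Python below): a crude correlation-based index stands in for the EET/RSA/VBSA indices of the study, and the point is simply how index values, rankings and interval widths behave as the sample size grows.

      import numpy as np

      rng = np.random.default_rng(11)

      def model(x):
          """Toy stand-in for a hydrological simulator with three input factors."""
          return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.05 * x[:, 2]

      def indices(x, y):
          """Crude sensitivity proxy: squared correlation of each input with the output."""
          return np.array([np.corrcoef(x[:, j], y)[0, 1] ** 2 for j in range(x.shape[1])])

      def bootstrap_ci(x, y, n_boot=500):
          stats = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(y), len(y))      # resample model runs with replacement
              stats.append(indices(x[idx], y[idx]))
          return np.percentile(stats, [2.5, 97.5], axis=0)

      for n in (50, 200, 1000, 5000):
          x = rng.uniform(0.0, 1.0, size=(n, 3))
          y = model(x)
          s = indices(x, y)
          lo, hi = bootstrap_ci(x, y)
          print(f"n={n:5d}  indices={np.round(s, 2)}  ranking={np.argsort(-s)}  "
                f"max CI width={(hi - lo).max():.2f}")

    As in the study, the ranking and the screening of the near-insensitive third factor typically stabilise at smaller sample sizes than the index values themselves.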

  6. Accuracy of Herdsmen Reporting versus Serologic Testing for Estimating Foot-and-Mouth Disease Prevalence

    PubMed Central

    Handel, Ian G.; Tanya, Vincent N.; Hamman, Saidou M.; Nfon, Charles; Bergman, Ingrid E.; Malirat, Viviana; Sorensen, Karl J.; Bronsvoort, Barend M. de C.

    2014-01-01

    Herdsman-reported disease prevalence is widely used in veterinary epidemiologic studies, especially for diseases with visible external lesions; however, the accuracy of such reports is rarely validated. Thus, we used latent class analysis in a Bayesian framework to compare sensitivity and specificity of herdsman reporting with virus neutralization testing and use of 3 nonstructural protein ELISAs for estimates of foot-and-mouth disease (FMD) prevalence on the Adamawa plateau of Cameroon in 2000. Herdsman-reported estimates in this FMD-endemic area were comparable to those obtained from serologic testing. To harness this cost-effective resource for monitoring emerging infectious diseases, we suggest that estimates of the sensitivity and specificity of herdsmen reporting should be done in parallel with serologic surveys of other animal diseases. PMID:25417556

  7. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  8. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  9. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  10. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  11. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
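
    A first-order margin sensitivity relation of the kind developed in this line of work can be written generically as follows. This is an illustrative form, under the assumption that the margin is the value of a loading or transfer parameter at which a limiting event occurs, with notation chosen here rather than taken from the thesis:

      \[
        F(x, \lambda, p) = 0, \qquad
        \frac{\partial \lambda^{*}}{\partial p}
          \;=\; -\,\frac{w^{\mathsf T} F_{p}}{\,w^{\mathsf T} F_{\lambda}\,},
      \]

    where F collects the equilibrium and limit equations, lambda is the loading (or transfer) parameter, p is any parameter of interest, and w is the left null vector (normal vector) of the Jacobian at the limiting point.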

  12. Updated Estimates of the Average Financial Return on Master's Degree Programs in the United States

    ERIC Educational Resources Information Center

    Gándara, Denisa; Toutkoushian, Robert K.

    2017-01-01

    In this study, we provide updated estimates of the private and social financial return on enrolling in a master's degree program in the United States. In addition to returns for all fields of study, we show estimated returns to enrolling in master's degree programs in business and education, specifically. We also conduct a sensitivity analysis to…

  13. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computation complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without significant increase in the computation complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
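
    The core idea of forward-mode algorithmic differentiation can be shown in a few lines of Python using dual numbers: each value carries its derivative, so a single model evaluation yields both the output and an exact partial derivative. The toy attenuation-style relation and its coefficients below are illustrative assumptions, not the AD tools or ground-motion models used in the study.

      import math

      class Dual:
          """Minimal forward-mode AD value: tracks f and df/dx simultaneously."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.dot + other.dot)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)
          __rmul__ = __mul__

      def dual_exp(x):
          return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

      # Toy relation ln(Y) = a + b*M + c*exp(d*M); not a real ground-motion prediction equation.
      def log_ground_motion(m, a=-1.0, b=0.9, c=0.05, d=0.3):
          return a + b * m + c * dual_exp(d * m)

      m = Dual(6.5, 1.0)                    # seed the derivative d/dM = 1
      y = log_ground_motion(m)
      print(f"ln(Y) = {y.val:.4f}, d ln(Y)/dM = {y.dot:.4f}")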

  14. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  15. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    ERIC Educational Resources Information Center

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  16. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in breeding of new crop varieties. E.g., longer and fewer lateral roots are suggested to improve drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining the information about root system architecture of plants using field based classical root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed based on a virtual experiment where a root architectural model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the aggregated information was computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and the standard deviation of elementary effects of a total number of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained based on literature and published data of winter wheat root architectural parameters. Root length density profiles of coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. Number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters show a highly nonlinear effect on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.
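
    The elementary-effects idea behind Morris screening can be sketched in Python with a radial one-at-a-time design: perturb each parameter in turn from a random base point and summarize the mean absolute effect (mu*) and its spread (sigma). The toy model and the number of parameters are illustrative assumptions, not the 37-parameter root architecture model of the study.

      import numpy as np

      def model(x):
          """Illustrative stand-in for an aggregated root-model output (e.g., total root length)."""
          return 2.0 * x[0] + x[1] ** 2 + 0.5 * x[0] * x[2] + 0.1 * x[3]

      rng = np.random.default_rng(42)
      d, r, delta = 4, 30, 0.25          # 4 parameters, 30 base points, step size in unit space
      effects = [[] for _ in range(d)]

      for _ in range(r):
          x = rng.uniform(0.0, 1.0 - delta, size=d)   # base point with room for the step
          y0 = model(x)
          for i in rng.permutation(d):                # one-at-a-time perturbations
              x_step = x.copy()
              x_step[i] += delta
              effects[i].append((model(x_step) - y0) / delta)

      for i in range(d):
          ee = np.array(effects[i])
          print(f"param {i}: mu* = {np.abs(ee).mean():.3f}, sigma = {ee.std(ddof=1):.3f}")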

  17. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation’s effect on curvature estimation for thin structures and its application on computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data, and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolations all significantly improved the sensitivity for small polyp detection. Conclusions: The image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  18. Cost-utility analysis of duloxetine in osteoarthritis: a US private payer perspective.

    PubMed

    Wielage, Ronald C; Bansal, Megha; Andrews, J Scott; Klein, Robert W; Happich, Michael

    2013-06-01

    Duloxetine has recently been approved in the USA for chronic musculoskeletal pain, including osteoarthritis and chronic low back pain. The cost effectiveness of duloxetine in osteoarthritis has not previously been assessed. Duloxetine is targeted as post first-line (after acetaminophen) treatment of moderate to severe pain. The objective of this study was to estimate the cost effectiveness of duloxetine in the treatment of osteoarthritis from a US private payer perspective compared with other post first-line oral treatments, including nonsteroidal anti-inflammatory drugs (NSAIDs), and both strong and weak opioids. A cost-utility analysis was performed using a discrete-state, time-dependent semi-Markov model based on the National Institute for Health and Clinical Excellence (NICE) model documented in its 2008 osteoarthritis guidelines. The model was extended for opioids by adding titration, discontinuation and additional adverse events (AEs). A life-long time horizon was adopted to capture the full consequences of NSAID-induced AEs. Fourteen health states comprised the structure of the model: treatment without persistent AE, six during-AE states, six post-AE states and death. Treatment-specific utilities were calculated using the transfer-to-utility method and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) total scores from a meta-analysis of osteoarthritis clinical trials of 12 weeks and longer. Costs for 2011 were estimated using Red Book, The Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project database, the literature and, sparingly, expert opinion. One-way and probabilistic sensitivity analyses were undertaken, as well as subgroup analyses of patients over 65 years old and a population at greater risk of NSAID-related AEs. In the base case the model estimated naproxen to be the lowest total-cost treatment, tapentadol the highest cost, and duloxetine the most effective after considering AEs. Duloxetine accumulated 0.027 discounted quality-adjusted life-years (QALYs) more than naproxen and 0.013 more than oxycodone. Celecoxib was dominated by naproxen, tramadol was subject to extended dominance, and strong opioids were dominated by duloxetine. The model estimated an incremental cost-effectiveness ratio (ICER) of US$47,678 per QALY for duloxetine versus naproxen. One-way sensitivity analysis identified the probabilities of NSAID-related cardiovascular AEs as the inputs to which the ICER was most sensitive when duloxetine was compared with an NSAID. When compared with a strong opioid, duloxetine dominated the opioid under nearly all sensitivity analysis scenarios. When compared with tramadol, the ICER was most sensitive to the costs of duloxetine and tramadol. In subgroup analysis, the cost per QALY for duloxetine versus naproxen fell to US$24,125 for patients over 65 years and to US$18,472 for a population at high risk of cardiovascular and gastrointestinal AEs. The model estimated that duloxetine was potentially cost effective in the base-case population and more cost effective for subgroups over 65 years or at high risk of NSAID-related AEs. In sensitivity analysis, duloxetine dominated all strong opioids in nearly all scenarios.
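
    The headline cost-effectiveness figure follows from the standard incremental cost-effectiveness ratio. With the reported gain of 0.027 QALYs versus naproxen, the implied incremental cost is roughly $1,290; this back-calculation is an illustration of how the ratio is read, not a figure reported in the study:

      \[
        \mathrm{ICER} \;=\; \frac{C_{\text{duloxetine}} - C_{\text{comparator}}}{E_{\text{duloxetine}} - E_{\text{comparator}}},
        \qquad
        \Delta C \;\approx\; 47{,}678~\$/\mathrm{QALY} \times 0.027~\mathrm{QALY} \;\approx\; \$1{,}290 .
      \]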

  19. Estimate of the direct and indirect annual cost of bacterial conjunctivitis in the United States

    PubMed Central

    2009-01-01

    Background The aim of this study was to estimate both the direct and indirect annual costs of treating bacterial conjunctivitis (BC) in the United States. This was a cost of illness study performed from a U.S. healthcare payer perspective. Methods A comprehensive review of the medical literature was supplemented by data on the annual incidence of BC which was obtained from an analysis of the National Ambulatory Medical Care Survey (NAMCS) database for the year 2005. Cost estimates for medical visits and laboratory or diagnostic tests were derived from published Medicare CPT fee codes. The cost of prescription drugs was obtained from standard reference sources. Indirect costs were calculated as those due to lost productivity. Due to the acute nature of BC, no cost discounting was performed. All costs are expressed in 2007 U.S. dollars. Results The number of BC cases in the U.S. for 2005 was estimated at approximately 4 million yielding an estimated annual incidence rate of 135 per 10,000. Base-case analysis estimated the total direct and indirect cost of treating patients with BC in the United States at $589 million. One-way sensitivity analysis, assuming either a 20% variation in the annual incidence of BC or treatment costs, generated a cost range of $469 million to $705 million. Two-way sensitivity analysis, assuming a 20% variation in both the annual incidence of BC and treatment costs occurring simultaneously, resulted in an estimated cost range of $377 million to $857 million. Conclusion The economic burden posed by BC is significant. The findings may prove useful to decision makers regarding the allocation of healthcare resources necessary to address the economic burden of BC in the United States. PMID:19939250
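
    The one-way versus two-way sensitivity logic can be reproduced on a deliberately simplified cost model (total cost = cases x average cost per case). The input values below are assumptions chosen only to roughly match the reported base case, not the study's actual inputs, and the simplified model will not reproduce the published ranges exactly.

      # Illustrative one-way and two-way sensitivity analysis on a toy cost-of-illness model.
      base_cases = 4.0e6              # annual BC cases (assumption)
      base_cost_per_case = 147.25     # USD per case, implied by a ~$589M base-case total (assumption)

      def total_cost(cases, cost_per_case):
          return cases * cost_per_case

      base = total_cost(base_cases, base_cost_per_case)
      print(f"base case: ${base / 1e6:.0f} M")

      # One-way: vary each input +/-20% while holding the other at its base value.
      one_way = {
          "incidence": (total_cost(0.8 * base_cases, base_cost_per_case),
                        total_cost(1.2 * base_cases, base_cost_per_case)),
          "treatment cost": (total_cost(base_cases, 0.8 * base_cost_per_case),
                             total_cost(base_cases, 1.2 * base_cost_per_case)),
      }
      for name, (lo, hi) in one_way.items():
          print(f"one-way ({name}): ${lo / 1e6:.0f} M to ${hi / 1e6:.0f} M")

      # Two-way: vary both inputs simultaneously.
      lo = total_cost(0.8 * base_cases, 0.8 * base_cost_per_case)
      hi = total_cost(1.2 * base_cases, 1.2 * base_cost_per_case)
      print(f"two-way: ${lo / 1e6:.0f} M to ${hi / 1e6:.0f} M")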

  20. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observation output data and corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on the Principal Component Analysis on the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and dynamic experiments with a seven-storey planar steel frame. Results show that it is robust to measurement noise, and the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
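
    A minimal sketch of the underlying idea, not the authors' exact formulation: the first-order relation dy ~ S dtheta is projected onto the leading principal components of an ensemble of analytical outputs, and the parameter perturbation is recovered by least squares in that subspace. All matrices and sizes below are synthetic assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_out, n_par = 200, 3
      S = rng.normal(size=(n_out, n_par))          # stand-in sensitivity matrix dy/dtheta
      theta_true = np.array([0.02, -0.05, 0.01])   # unknown parameter perturbation (ground truth)

      # "Measured" output change = first-order prediction plus measurement noise.
      dy_meas = S @ theta_true + 0.01 * rng.normal(size=n_out)

      # Principal components of an ensemble of analytical outputs from the known model.
      ensemble = S @ rng.normal(scale=0.05, size=(n_par, 500))
      centered = ensemble - ensemble.mean(axis=1, keepdims=True)
      U, _, _ = np.linalg.svd(centered, full_matrices=False)
      P = U[:, :n_par]                             # keep as many components as unknown parameters

      # Project the identification equation into the PC subspace and solve by least squares.
      theta_hat, *_ = np.linalg.lstsq(P.T @ S, P.T @ dy_meas, rcond=None)
      print("estimated perturbation:", np.round(theta_hat, 4))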

  1. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  2. Sensitivity of FIA Volume Estimates to Changes in Stratum Weights and Number of Strata

    Treesearch

    James A. Westfall; Michael Hoppus

    2005-01-01

    In the Northeast region, the USDA Forest Service Forest Inventory and Analysis (FIA) program utilizes stratified sampling techniques to improve the precision of population estimates. Recently, interpretation of aerial photographs was replaced with classified remotely sensed imagery to determine stratum weights and plot stratum assignments. However, stratum weights...

  3. Emission rate modeling and risk assessment at an automobile plant from painting operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, A.; Shrivastava, A.; Kulkarni, A.

    Pollution from automobile plants from painting operations has been addressed in the Clean Air Act Amendments (1990). The estimation of pollutant emissions from automobile painting operations was done mostly by approximate procedures rather than by actual calculations. The purpose of this study was to develop a methodology for calculating the emissions of the pollutants from painting operations in an automobile plant. Five scenarios involving an automobile painting operation, located in Columbus (Ohio), were studied for pollutant emissions and the concomitant risk. In the study of risk, a sensitivity analysis was done using Crystal Ball® on the parameters involved in risk. This software uses the Monte Carlo principle. The most sensitive factor in the risk analysis was the ground level concentration of the pollutants. All scenarios studied met the safety goal (a risk value of 1 × 10⁻⁶) with different confidence levels. The highest level of confidence in meeting the safety goal was displayed by Scenario 1 (Alpha Industries). The results from the scenarios suggest that risk is associated with the quantity of released toxic pollutants. The sensitivity analysis of the various parameters shows that the average spray rate of paint is the most important parameter in the estimation of pollutants from the painting operations. The entire study is a complete module that can be used by environmental pollution control agencies for estimation of pollution levels and estimation of associated risk. The study can be further extended to other operations in an automobile industry or to different industries.

  4. Surrogacy of progression-free survival (PFS) for overall survival (OS) in esophageal cancer trials with preoperative therapy: Literature-based meta-analysis.

    PubMed

    Kataoka, K; Nakamura, K; Mizusawa, J; Kato, K; Eba, J; Katayama, H; Shibata, T; Fukuda, H

    2017-10-01

    There have been no reports evaluating progression-free survival (PFS) as a surrogate endpoint in resectable esophageal cancer. This study was conducted to evaluate the trial level correlations between PFS and overall survival (OS) in resectable esophageal cancer with preoperative therapy and to explore the potential benefit of PFS as a surrogate endpoint for OS. A systematic literature search of randomized trials with preoperative chemotherapy or preoperative chemoradiotherapy for esophageal cancer reported from January 1990 to September 2014 was conducted using PubMed and the Cochrane Library. Weighted linear regression, using the sample size of each trial as the weight, was used to estimate the coefficient of determination (R²) between treatment effects on PFS and OS. The primary analysis included trials in which the HR for both PFS and OS was reported. The sensitivity analysis included trials in which either HR or median survival time of PFS and OS was reported. In the sensitivity analysis, HR was estimated from the median survival time of PFS and OS, assuming an exponential distribution. Of 614 articles, 10 trials were selected for the primary analysis and 15 for the sensitivity analysis. The primary analysis did not show a correlation between treatment effects on PFS and OS (R² = 0.283, 95% CI [0.00-0.90]). The sensitivity analysis did not show an association between PFS and OS (R² = 0.084, 95% CI [0.00-0.70]). Although the number of randomized controlled trials evaluating preoperative therapy for esophageal cancer is limited at the moment, PFS is not suitable as a surrogate primary endpoint for OS. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
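
    The conversion from median survival times to hazard ratios used in the sensitivity analysis follows from the stated exponential assumption: with survival S(t) = exp(-lambda t), the hazard is lambda = ln 2 / m for median m, so the hazard ratio reduces to a ratio of medians. Written as a generic identity (notation chosen here, not the authors'):

      \[
        \lambda = \frac{\ln 2}{m}, \qquad
        \mathrm{HR} = \frac{\lambda_{\text{treatment}}}{\lambda_{\text{control}}}
                    = \frac{m_{\text{control}}}{m_{\text{treatment}}} .
      \]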

  5. Cross-National Estimates of the Effects of Family Background on Student Achievement: A Sensitivity Analysis

    ERIC Educational Resources Information Center

    Nonoyama-Tarumi, Yuko

    2008-01-01

    This article uses the data from the Programme for International Student Assessment (PISA) 2000 to examine whether the influence of family background on educational achievement is sensitive to different measures of the family's socio-economic status (SES). The study finds that, when a multidimensional measure of SES is used, the family background…

  6. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to water pollution problem. The model involves shallow waters equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
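
    For contrast with the second-order analysis on the optimality system described above, the standard first-order adjoint sensitivity identity for a response constrained by a model can be stated generically as follows (a textbook relation for a steady constraint F(u, alpha) = 0 and response J(u, alpha); this is not the authors' exact OS formulation):

      \[
        F(u,\alpha) = 0, \qquad
        \left(\frac{\partial F}{\partial u}\right)^{\!\mathsf T} \lambda
          = \left(\frac{\partial J}{\partial u}\right)^{\!\mathsf T}, \qquad
        \frac{dJ}{d\alpha}
          = \frac{\partial J}{\partial \alpha} - \lambda^{\mathsf T}\,\frac{\partial F}{\partial \alpha} .
      \]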

  7. Analysis and comparison of sleeping posture classification methods using pressure sensitive bed system.

    PubMed

    Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J

    2009-01-01

    Pressure ulcers are common problems for bedridden patients. Caregivers need to reposition the sleeping posture of a patient every two hours in order to reduce the risk of getting ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA) and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress that can help caregivers to make correct sleeping posture changes for the prevention of pressure ulcers.
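
    A minimal sketch of the pipeline described (skewness/kurtosis features, PCA, SVM), using synthetic pressure maps in place of real mattress data; the feature choices, map sizes, and posture classes are illustrative assumptions, not the study's protocol.

      import numpy as np
      from scipy.stats import skew, kurtosis
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def synthetic_pressure_map(posture, shape=(32, 64)):
          """Crude synthetic pressure image for one of three postures (illustrative only)."""
          base = rng.gamma(2.0, 1.0, size=shape)
          if posture == 1:                      # side-lying, mass shifted to one half
              base[:, : shape[1] // 2] *= 2.0
          elif posture == 2:                    # side-lying, mass shifted to the other half
              base[:, shape[1] // 2 :] *= 2.0
          return base

      def features(img):
          rows, cols = img.sum(axis=1), img.sum(axis=0)
          return [skew(rows), kurtosis(rows), skew(cols), kurtosis(cols), img.mean(), img.std()]

      X, y = [], []
      for label in (0, 1, 2):
          for _ in range(100):
              X.append(features(synthetic_pressure_map(label)))
              y.append(label)
      X, y = np.array(X), np.array(y)

      clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf", C=1.0))
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())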

  8. Policy impacts estimates are sensitive to data selection in empirical analysis: evidence from the United States – Canada softwood lumber trade dispute

    Treesearch

    Daowei Zhang; Rajan Parajuli

    2016-01-01

    In this paper, we use the U.S. softwood lumber import demand model as a case study to show that the effects of past trade policies are sensitive to the data sample used in empirical analyses.  We conclude that, to be consistent with the purpose of analysis of policy and to ensure all else being equal, policy impacts can only be judged by using data up to the time when...

  9. Clinical evaluation and validation of laboratory methods for the diagnosis of Bordetella pertussis infection: Culture, polymerase chain reaction (PCR) and anti-pertussis toxin IgG serology (IgG-PT)

    PubMed Central

    Cassiday, Pamela K.; Pawloski, Lucia C.; Tatti, Kathleen M.; Martin, Monte D.; Briere, Elizabeth C.; Tondella, M. Lucia; Martin, Stacey W.

    2018-01-01

    Introduction The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Methods Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2–4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods—pertussis culture as the “gold standard,” composite reference standard analysis (CRS), and latent class analysis (LCA). Results Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Conclusions Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis. PMID:29652945

  10. Verification bias: an under-recognized source of error in assessing the efficacy of MRI of the meniscii.

    PubMed

    Richardson, Michael L; Petscavage, Jonelle M

    2011-11-01

    The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  11. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147

  12. A lower and more constrained estimate of climate sensitivity using updated observations and detailed radiative forcing time series

    NASA Astrophysics Data System (ADS)

    Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.

    2012-04-01

    A key question in climate science is to quantify the sensitivity of the climate system to perturbation in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9°C with 90% credible interval (C.I.) ranging from 1.2 to 2.9°C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6°C with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. Also the transient climate response (TCR) is estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1°C, while the 90% C.I. is significantly broader, ranging from 1.1 to 3.4°C, if only data up to and including year 2000 are used.
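
    The energy-balance link between forcing, ocean heat uptake, and temperature that underlies this kind of estimation can be written in a generic zero-dimensional form; this is the standard textbook relation, not the authors' specific model or stochastic formulation:

      \[
        N(t) = F(t) - \lambda\, \Delta T(t), \qquad
        \mathrm{ECS} = \frac{F_{2\times\mathrm{CO_2}}}{\lambda},
      \]

    where N is the planetary heat imbalance (dominated by ocean heat uptake), F the radiative forcing, Delta T the surface temperature change, and lambda the climate feedback parameter.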

  13. Sensitivity Analysis on Remote Sensing Evapotranspiration Algorithm of Surface Energy Balance for Land

    NASA Astrophysics Data System (ADS)

    Wang, J.; Samms, T.; Meier, C.; Simmons, L.; Miller, D.; Bathke, D.

    2005-12-01

    Spatial evapotranspiration (ET) is usually estimated by the Surface Energy Balance Algorithm for Land. The average accuracy of the algorithm is 85% on a daily basis and 95% on a seasonal basis. However, the accuracy of the algorithm varies from 67% to 95% on instantaneous ET estimates and, as reported in 18 studies, 70% to 98% on 1 to 10-day ET estimates. There is a need to understand the sensitivity of the ET calculation with respect to the algorithm variables and equations. With an increased understanding, information can be developed to improve the algorithm, and to better identify the key variables and equations. A Modified Surface Energy Balance Algorithm for Land (MSEBAL) was developed and validated with data from a pecan orchard and an alfalfa field. The MSEBAL uses ground reflectance and temperature data from ASTER sensors along with humidity, wind speed, and solar radiation data from a local weather station. MSEBAL outputs hourly and daily ET with 90 m by 90 m resolution. A sensitivity analysis was conducted for MSEBAL on the ET calculation. In order to observe the sensitivity of the calculation to a particular variable, the value of that variable was changed while holding the magnitudes of the other variables constant. The key variables and equations to which the ET calculation is most sensitive were determined in this study.

  14. Age-Related Differences in Susceptibility to Carcinogenesis. II. Approaches for Application and Uncertainty Analyses for Individual Genetically Acting Carcinogens

    PubMed Central

    Hattis, Dale; Goble, Robert; Chu, Margaret

    2005-01-01

    In an earlier report we developed a quantitative likelihood-based analysis of the differences in sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth to weaning, and weaning to 60 days) relative to exposures in adult life. Here we draw implications for assessing human risks for full lifetime exposures, taking into account three types of uncertainties in making projections from the rodent data: uncertainty in the central estimates of the life-stage–specific sensitivity factors estimated earlier, uncertainty from chemical-to-chemical differences in life-stage–specific sensitivities for carcinogenesis, and uncertainty in the mapping of rodent life stages to human ages/exposure periods. Among the uncertainties analyzed, the mapping of rodent life stages to human ages/exposure periods is most important quantitatively (a range of several-fold in estimates of the duration of the human equivalent of the highest sensitivity “birth to weaning” period in rodents). The combined effects of these uncertainties are estimated with Monte Carlo analyses. Overall, the estimated population arithmetic mean risk from lifetime exposures at a constant milligrams per kilogram body weight level to a generic mutagenic carcinogen is about 2.8-fold larger than expected from adult-only exposure with 5–95% confidence limits of 1.5- to 6-fold. The mean estimates for the 0- to 2-year and 2- to 15-year periods are about 35–55% larger than the 10- and 3-fold sensitivity factor adjustments recently proposed by the U.S. Environmental Protection Agency. The present results are based on data for only nine chemicals, including five mutagens. Risk inferences will be altered as data become available for other chemicals. PMID:15811844

  15. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  16. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  17. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence.

    PubMed

    Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma

    2017-11-14

    The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of that method was investigated in terms of accuracy and bias of the decisional outcomes. Influence of an informed elicitation of the prior belief of chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were adequate with respect to standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that results on estimation are improved with a rational elicitation of the prior probabilities of age.

  18. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
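
    A compact sketch of the multivariate idea described above: sample the uncertain transition probabilities, solve a toy two-state MDP for each draw by value iteration, and report how often the base-case optimal policy remains optimal. The states, actions, rewards, and Beta distributions below are invented for illustration and are not the paper's case study or its exact procedure.

      import numpy as np

      rng = np.random.default_rng(7)
      gamma, states = 0.97, 2            # states: 0 = well, 1 = ill; actions: 0 = wait, 1 = treat

      def solve_mdp(P, R, n_iter=1000, tol=1e-6):
          """Value iteration; returns the optimal action in each state."""
          V = np.zeros(states)
          for _ in range(n_iter):
              Q = R + gamma * (P @ V)    # P: (actions, states, states), V: (states,) -> Q: (actions, states)
              V_new = Q.max(axis=0)
              if np.max(np.abs(V_new - V)) < tol:
                  break
              V = V_new
          return Q.argmax(axis=0)

      def build_mdp(p_recover_treat, p_recover_wait, p_fall_ill, treat_disutility=0.1):
          P = np.array([
              [[1 - p_fall_ill, p_fall_ill], [p_recover_wait, 1 - p_recover_wait]],    # wait
              [[1 - p_fall_ill, p_fall_ill], [p_recover_treat, 1 - p_recover_treat]],  # treat
          ])
          R = np.array([
              [1.0, 0.6],                                          # wait: QALY-like reward per cycle
              [1.0 - treat_disutility, 0.6 - treat_disutility],    # treat: same minus a disutility
          ])
          return P, R

      base_policy = solve_mdp(*build_mdp(0.6, 0.3, 0.1))

      # Probabilistic multivariate sensitivity: sample the three probabilities jointly.
      n_draws = 1000
      agree = sum(
          np.array_equal(solve_mdp(*build_mdp(rng.beta(12, 8), rng.beta(6, 14), rng.beta(2, 18))),
                         base_policy)
          for _ in range(n_draws)
      )
      print("base-case policy (action per state):", base_policy)
      print(f"confidence that the base-case policy is optimal: {agree / n_draws:.2%}")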

  19. Critique and sensitivity analysis of the compensation function used in the LMS Hudson River striped bass models. Environmental Sciences Division publication No. 944

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Winkle, W.; Christensen, S.W.; Kauffman, G.

    1976-12-01

    The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.

  20. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
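
    As a generic illustration of the bivariate approach mentioned above (standard form, with notation chosen here rather than taken from the article): each study's true logit-sensitivity and logit-specificity are drawn from a bivariate normal distribution, and the observed counts are binomial given those study-specific values.

      \[
        \begin{pmatrix} \operatorname{logit}(Se_i) \\ \operatorname{logit}(Sp_i) \end{pmatrix}
          \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix}, \Sigma \right),
        \qquad
        y_i^{D+} \sim \operatorname{Bin}\!\left(n_i^{D+},\, Se_i\right), \quad
        y_i^{D-} \sim \operatorname{Bin}\!\left(n_i^{D-},\, Sp_i\right).
      \]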

  1. Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.

    PubMed

    Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R

    2013-03-01

    Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
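
    The shrinkage behavior described can be illustrated with a small self-contained sketch: a sparse 2x2 table gives a noisy log odds ratio, and a weakly informative normal prior on the log odds ratio pulls the estimate toward the null while leaving well-supported estimates largely unchanged. The table counts and the prior scale below are invented for illustration, not the head-and-neck cancer data, and a simple grid posterior with a Wald likelihood stands in for MCMC.

      import numpy as np
      from scipy import stats

      # Sparse 2x2 table (illustrative counts):   exposed: 3 cases / 10 controls
      #                                           unexposed: 1 case / 40 controls
      a, b, c, d = 3, 10, 1, 40

      # Wald approximation to the likelihood of the log odds ratio.
      log_or_hat = np.log((a * d) / (b * c))
      se_hat = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

      # Weakly informative prior on the log OR: Normal(0, 1.5^2), i.e. most odds ratios within ~1/20 to 20.
      grid = np.linspace(-5.0, 5.0, 4001)
      dx = grid[1] - grid[0]
      log_post = stats.norm.logpdf(log_or_hat, loc=grid, scale=se_hat) \
               + stats.norm.logpdf(grid, loc=0.0, scale=1.5)
      post = np.exp(log_post - log_post.max())
      post /= post.sum() * dx

      mean = np.sum(grid * post) * dx
      cdf = np.cumsum(post) * dx
      lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
      print(f"MLE log OR = {log_or_hat:.2f} (SE {se_hat:.2f})")
      print(f"posterior mean log OR = {mean:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")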

  2. Sensitivity Analyses for Sparse-Data Problems—Using Weakly Informative Bayesian Priors

    PubMed Central

    Hamra, Ghassan B.; MacLehose, Richard F.; Cole, Stephen R.

    2013-01-01

    Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist. PMID:23337241

  3. Hydrogen from coal cost estimation guidebook

    NASA Technical Reports Server (NTRS)

    Billings, R. E.

    1981-01-01

    In an effort to establish baseline information whereby specific projects can be evaluated, a current set of parameters which are typical of coal gasification applications was developed. Using these parameters a computer model allows researchers to interrelate cost components in a sensitivity analysis. The results make possible an approximate estimation of hydrogen energy economics from coal, under a variety of circumstances.

  4. Lucid dreaming incidence: A quality effects meta-analysis of 50 years of research.

    PubMed

    Saunders, David T; Roe, Chris A; Smith, Graham; Clegg, Helen

    2016-07-01

    We report a quality effects meta-analysis on studies from the period 1966-2016 measuring either (a) lucid dreaming prevalence (one or more lucid dreams in a lifetime); (b) frequent lucid dreaming (one or more lucid dreams in a month) or both. A quality effects meta-analysis allows for the minimisation of the influence of study methodological quality on overall model estimates. Following sensitivity analysis, a heterogeneous lucid dreaming prevalence data set of 34 studies yielded a mean estimate of 55%, 95% C. I. [49%, 62%] for which moderator analysis showed no systematic bias for suspected sources of variability. A heterogeneous lucid dreaming frequency data set of 25 studies yielded a mean estimate of 23%, 95% C. I. [20%, 25%], moderator analysis revealed no suspected sources of variability. These findings are consistent with earlier estimates of lucid dreaming prevalence and frequent lucid dreaming in the population but are based on more robust evidence. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
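
    The directional variogram at the heart of VARS can be illustrated on a toy response surface. The sketch below is a simplified stand-in for STAR-VARS (random star centres, one cross-section per factor through each centre, a single lag); the test function, number of stars, and grid resolution are arbitrary choices, and the bootstrap confidence intervals and integrated VARS metrics described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy response surface; x has shape (..., 3).
    return np.sin(2 * np.pi * x[..., 0]) + 0.5 * x[..., 1] ** 2 + 0.1 * x[..., 2]

def directional_variogram(factor, h, n_stars=200, grid=21):
    """gamma_i(h) = 0.5 * E[(y(x + h e_i) - y(x))^2], estimated from
    cross-sections through randomly placed star centres (simplified STAR-VARS)."""
    step = 1.0 / (grid - 1)
    lag = int(round(h / step))
    diffs = []
    for _ in range(n_stars):
        centre = rng.uniform(size=3)
        xs = np.tile(centre, (grid, 1))
        xs[:, factor] = np.linspace(0.0, 1.0, grid)   # cross-section along factor i
        y = model(xs)
        diffs.append(y[lag:] - y[:-lag])
    diffs = np.concatenate(diffs)
    return 0.5 * np.mean(diffs ** 2)

for i in range(3):
    print(f"factor {i}: gamma(0.1) = {directional_variogram(i, 0.1):.4f}")
```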

  6. 49 CFR Appendix B to Part 236 - Risk Assessment Criteria

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...

  7. 49 CFR Appendix B to Part 236 - Risk Assessment Criteria

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...

  8. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  9. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
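
    The prior sensitivity check recommended above can be illustrated without MCMC in a conjugate normal model. The sketch below uses hypothetical data and prior settings (the sample sizes, prior mean, and prior standard deviations are not from the study): it refits the posterior under several prior widths and shows that the small-sample posterior depends strongly on the prior while the large-sample posterior barely moves.

```python
import numpy as np

def posterior_mean_sd(y, prior_mean, prior_sd, sigma=1.0):
    """Posterior for a normal mean with known sigma and a normal prior."""
    n = len(y)
    w_data, w_prior = n / sigma**2, 1.0 / prior_sd**2
    post_var = 1.0 / (w_data + w_prior)
    post_mean = post_var * (w_data * np.mean(y) + w_prior * prior_mean)
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(1)
small_sample = rng.normal(loc=0.8, scale=1.0, size=10)    # "small n" scenario
large_sample = rng.normal(loc=0.8, scale=1.0, size=1000)  # "large n" scenario

for prior_sd in (0.1, 0.5, 2.0, 10.0):   # from very informative to very diffuse
    m_small, _ = posterior_mean_sd(small_sample, prior_mean=0.0, prior_sd=prior_sd)
    m_large, _ = posterior_mean_sd(large_sample, prior_mean=0.0, prior_sd=prior_sd)
    print(f"prior sd={prior_sd:5.1f}  posterior mean (n=10): {m_small:5.2f}"
          f"   (n=1000): {m_large:5.2f}")
```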

  10. Selecting focal species as surrogates for imperiled species using relative sensitivities derived from occupancy analysis

    USGS Publications Warehouse

    Silvano, Amy; Guyer, Craig; Steury, Todd; Grand, James B.

    2017-01-01

    Most imperiled species are rare or elusive and difficult to detect, which makes gathering data to estimate their response to habitat restoration a challenge. We used a repeatable, systematic method for selecting focal species using relative sensitivities derived from occupancy analysis. Our objective was to select suites of focal species that would be useful as surrogates when predicting effects of restoration of habitat characteristics preferred by imperiled species. We developed 27 habitat profiles that represent general habitat relationships for 118 imperiled species. We identified 23 regularly encountered species that were sensitive to important aspects of those profiles. We validated our approach by examining the correlation between estimated probabilities of occupancy for species of concern and focal species selected using our method. Occupancy rates of focal species were more related to occupancy rates of imperiled species when they were sensitive to more of the parameters appearing in profiles of imperiled species. We suggest that this approach can be an effective means of predicting responses by imperiled species to proposed management actions. However, adequate monitoring will be required to determine the effectiveness of using focal species to guide management actions.

  11. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  12. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity because it is model independent and provides accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in the grouping strategies used for uncertainty components. The variance-based sensitivity analysis is thereby extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and combinations of uncertainty components that represent key model system processes (e.g., the groundwater recharge process, the reactive transport process). For testing and demonstration, the methodology was applied to a real-world groundwater reactive transport modeling case with various uncertainty sources. The results demonstrate that the new sensitivity analysis method estimates accurate importance measures for uncertainty sources formed by different combinations of uncertainty components. The new methodology provides useful information for environmental managers and decision-makers formulating policies and strategies.
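
    The grouping idea can be illustrated with a standard variance-based estimator, leaving aside the Bayesian-network machinery described above. The sketch below computes the first-order (closed) Sobol index of a group of inputs on a toy additive model using a pick-freeze estimator; the model and the group assignment are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Toy additive model: x[:, 0] and x[:, 1] stand in for one "process group".
    return 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

def grouped_first_order_index(group, n=100_000, dim=3):
    """First-order (closed) Sobol index of a group of inputs, via the
    pick-freeze estimator: S_u = Cov(f(A), f(B_u)) / Var(f), where B_u
    copies the columns in `group` from A and the remaining columns from B."""
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    B_u = B.copy()
    B_u[:, group] = A[:, group]
    yA, yBu = model(A), model(B_u)
    return np.cov(yA, yBu)[0, 1] / np.var(yA, ddof=1)

# Analytic shares for U(0,1) inputs: total variance = (4 + 1 + 0.25) / 12.
print("S_{x1,x2} ~", grouped_first_order_index([0, 1]))   # expect ~ 5 / 5.25 = 0.952
print("S_{x3}    ~", grouped_first_order_index([2]))       # expect ~ 0.25 / 5.25 = 0.048
```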

  13. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559

  14. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.
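
    A probabilistic sensitivity analysis of this kind can be sketched by assigning a distribution to each cost input and recomputing the incremental cost for every joint draw. The distributions, means, and cost categories below are hypothetical and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 10_000

def gamma_from_mean_sd(mean, sd, size):
    """Gamma draws parameterized by mean and standard deviation."""
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    return rng.gamma(shape, scale, size)

# Each cost component gets its own distribution (hypothetical values).
intervention_cost = (gamma_from_mean_sd(1500, 300, n_draws)    # staffing
                     + gamma_from_mean_sd(900, 200, n_draws)   # materials
                     + gamma_from_mean_sd(700, 250, n_draws))  # travel/facilities
control_cost = gamma_from_mean_sd(600, 150, n_draws)

incremental = intervention_cost - control_cost
lo, hi = np.percentile(incremental, [2.5, 97.5])
print(f"mean incremental cost: ${incremental.mean():,.0f}"
      f"  (95% interval ${lo:,.0f} to ${hi:,.0f})")
```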

  15. A Predictive Model to Estimate Cost Savings of a Novel Diagnostic Blood Panel for Diagnosis of Diarrhea-predominant Irritable Bowel Syndrome.

    PubMed

    Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali

    2016-07-01

    A high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of a novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. Budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base-case) predicted $509 cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would be 0.451 to attain cost neutrality. The budget impact analysis predicted a cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
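
    The structure of the cost-minimization comparison and the break-even calculation can be sketched as follows. The panel cost, work-up cost, and test characteristics are placeholder values, and the pathway is deliberately simplified (a negative panel is assumed to trigger the full exclusionary work-up), so the break-even probability it produces is illustrative rather than the 0.451 reported above.

```python
import numpy as np

# Hypothetical inputs, not the study's values.
PANEL_COST, WORKUP_COST = 400.0, 2500.0
SENSITIVITY, SPECIFICITY = 0.44, 0.90

def expected_cost_panel_pathway(p_disease):
    # Positive panel -> proceed to treatment; negative -> full exclusionary work-up.
    p_positive = p_disease * SENSITIVITY + (1 - p_disease) * (1 - SPECIFICITY)
    return PANEL_COST + (1 - p_positive) * WORKUP_COST

def expected_cost_exclusionary_pathway(p_disease):
    return WORKUP_COST  # everyone receives the full exclusionary work-up

# Break-even pretest probability: where the two pathways cost the same.
ps = np.linspace(0.0, 1.0, 10_001)
diff = expected_cost_panel_pathway(ps) - expected_cost_exclusionary_pathway(ps)
print("break-even pretest probability ~", ps[np.argmin(np.abs(diff))])
```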

  16. Economic Evaluation of First-Line Treatments for Metastatic Renal Cell Carcinoma: A Cost-Effectiveness Analysis in A Health Resource–Limited Setting

    PubMed Central

    Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei

    2012-01-01

    Background: To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). Methods and Findings: A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistant program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Conclusions: Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC. PMID:22412884

  17. Economic evaluation of first-line treatments for metastatic renal cell carcinoma: a cost-effectiveness analysis in a health resource-limited setting.

    PubMed

    Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei

    2012-01-01

    To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistant program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC.
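
    The marginal cost-effectiveness figures quoted above are incremental cost-effectiveness ratios (ICERs). A generic sketch of the calculation, with a simple strict-dominance check, is shown below; the costs and QALYs are placeholders, not the model's outputs.

```python
# Incremental cost-effectiveness ratio (ICER) relative to a reference strategy,
# with a simple check for strict dominance (more costly and no more effective).
# The cost and QALY figures below are placeholders, not values from the cited model.
strategies = {
    "interferon-alfa":        {"cost": 20_000.0, "qaly": 1.10},
    "sunitinib":              {"cost": 60_000.0, "qaly": 1.40},
    "bevacizumab + IFN-alfa": {"cost": 75_000.0, "qaly": 1.30},
}

reference = strategies["interferon-alfa"]
for name, s in strategies.items():
    if name == "interferon-alfa":
        continue
    d_cost = s["cost"] - reference["cost"]
    d_qaly = s["qaly"] - reference["qaly"]
    if d_qaly <= 0 and d_cost >= 0:
        print(f"{name}: dominated by the reference strategy")
    else:
        print(f"{name}: ICER = ${d_cost / d_qaly:,.0f} per QALY gained")
```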

  18. Investigating the Water Vapor Component of the Greenhouse Effect from the Atmospheric InfraRed Sounder (AIRS)

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Barnet, C.; Sun, F.; Goldberg, M.

    2009-12-01

    We investigate the water vapor component of the greenhouse effect in the tropical region using data from the Atmospheric InfraRed Sounder (AIRS). Unlike previous studies, which have relied on the assumption of a constant lapse rate and performed coarse-layer or total-column sensitivity analyses, we exploit the high vertical resolution of AIRS to measure the sensitivity of the greenhouse effect to water vapor along the vertical column. We employ a "partial radiative perturbation" methodology and discriminate between two different dynamic regimes, convective and non-convective. This analysis provides useful insights on the occurrence and strength of the water vapor greenhouse effect and its sensitivity to spatial variations of surface temperature. By comparison with the clear-sky computation conducted in previous works, we attempt to confine an estimate of the cloud contribution to the greenhouse effect. Our results compare well with the current literature, falling in the upper range of the existing global circulation model estimates. We value the results of this analysis as a useful reference to help discriminate among model simulations and improve our capability to make predictions about the future of our climate.

  19. Genetic Variation and Combining Ability Analysis of Bruising Sensitivity in Agaricus bisporus

    PubMed Central

    Gao, Wei; Baars, Johan J. P.; Dolstra, Oene; Visser, Richard G. F.; Sonnenberg, Anton S. M.

    2013-01-01

    Advanced button mushroom cultivars that are less sensitive to mechanical bruising are required by the mushroom industry, where automated harvesting still cannot be used for the fresh mushroom market. The genetic variation in bruising sensitivity (BS) of Agaricus bisporus was studied through an incomplete set of diallel crosses to gain insight into the heritability of BS and the combining ability of the parental lines used and, in this way, to estimate their breeding value. To this end, nineteen homokaryotic lines recovered from wild strains and cultivars were inter-crossed in a diallel scheme. Fifty-one successful hybrids were grown under controlled conditions, and the BS of these hybrids was assessed. BS was shown to be a trait with a very high heritability. The results also showed that brown hybrids were generally less sensitive to bruising than white hybrids. The diallel scheme allowed estimation of the general combining ability (GCA) of each homokaryotic parental line and of the specific combining ability (SCA) of each hybrid. The line with the lowest GCA is seen as the most attractive donor for improving resistance to bruising, whereas the line with the highest GCA value gave rise to hybrids sensitive to bruising. The highest negative SCA possibly indicates heterosis effects for resistance to bruising. This study provides a foundation for estimating the breeding value of parental lines to further study the genetic factors underlying bruising sensitivity and other quality-related traits, and to select potential parental lines for further heterosis breeding. The approach of studying combining ability in a diallel scheme was used for the first time in button mushroom breeding. PMID:24116171

  20. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
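
    For orientation, the sketch below computes the plain (unweighted) Harrell's concordance statistic for right-censored data; the IPCW estimators analyzed in the paper additionally reweight usable pairs by the inverse probability of censoring, which is not shown here. The toy times, event indicators, and risk scores are invented.

```python
import numpy as np

def harrell_c(time, event, risk_score):
    """Harrell's concordance for right-censored data: among usable pairs
    (the subject with the shorter observed time had an event), count pairs
    where the higher predicted risk belongs to the subject who failed earlier.
    This is the plain c-statistic, not the IPCW-weighted estimator."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # pair is usable
                usable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1
                elif risk_score[i] == risk_score[j]:
                    concordant += 0.5
    return concordant / usable

time = np.array([2.0, 5.0, 3.0, 8.0, 4.0])
event = np.array([1, 0, 1, 1, 0])           # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.2, 0.7, 0.1, 0.5])  # higher = predicted to fail sooner
print("Harrell's c =", harrell_c(time, event, risk))
```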

  1. Using TRMM Data To Understand Interannual Variations In the Tropical Water Balance

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Fitzjarrald, Dan; Arnold, James E. (Technical Monitor)

    2002-01-01

    A significant element of the science rationale for TRMM centered on assembling rainfall data needed to validate climate models: climatological estimates of precipitation, its spatial and temporal variability, and vertical modes of latent heat release. Since the launch of TRMM, great interest has emerged in the science community in quantifying interannual variability (IAV) of precipitation and its relationship to sea-surface temperature (SST) changes. The fact that TRMM has sampled one strong warm/cold ENSO couplet, together with the prospect of a mission lifetime approaching ten years, has bolstered this interest in these longer time scales. Variability on a regional basis as well as for the tropics as a whole is of concern. Our analysis of TRMM results so far has shown a surprising lack of concordance between various algorithms in quantifying IAV of precipitation. The first objective of this talk is to quantify the sensitivity of tropical precipitation to changes in SSTs. We analyze the performance of the 3A11, 3A25, and 3B31 algorithms and investigate their relationship to scattering-based algorithms constructed from SSM/I and TRMM 85 GHz data. The physical basis for the differences (and similarities) in depicting tropical oceanic and land rainfall will be discussed. We argue that scattering-based estimates of variability constitute a useful upper bound for precipitation variations. These results lead to the second question addressed in this talk: How do TRMM precipitation/SST sensitivities compare to estimates of oceanic evaporation, and what are the implications of these uncertainties in determining interannual changes in large-scale moisture transport? We summarize results of an analysis performed using COADS data supplemented by SSM/I estimates of near-surface variables to assess evaporation sensitivity to SST. The response of nearly 5 W/m² per K is compared to various TRMM precipitation sensitivities. Implied moisture convergence over the tropics and its sensitivity to errors in these algorithms is discussed.

  2. Diagnostic performance of HbA1c for diabetes in Arab vs. European populations: a systematic review and meta-analysis.

    PubMed

    Bertran, E A; Berlie, H D; Taylor, A; Divine, G; Jaber, L A

    2017-02-01

    To examine differences in the performance of HbA1c for diagnosing diabetes in Arabs compared with Europeans. The PubMed, Embase and Cochrane library databases were searched for records published between 1998 and 2015. Estimates of sensitivity, specificity and log diagnostic odds ratios for an HbA1c cut-point of 48 mmol/mol (6.5%) were compared between Arabs and Europeans, using a bivariate linear mixed-model approach. For studies reporting multiple cut-points, population-specific summary receiver operating characteristic (SROC) curves were constructed. In addition, sensitivity, specificity and Youden Index were estimated for strata defined by HbA1c cut-point and population type. Database searches yielded 1912 unique records; 618 full-text articles were reviewed. Fourteen studies met the inclusion criteria; hand-searching yielded three additional eligible studies. Three Arab (N = 2880) and 16 European populations (N = 49,127) were included in the analysis. Summary sensitivity and specificity for an HbA1c cut-point of 48 mmol/mol (6.5%) in both populations were 42% (33-51%) and 97% (95-98%). There was no difference in area under SROC curves between Arab and European populations (0.844 vs. 0.847; P = 0.867), suggesting no difference in HbA1c diagnostic accuracy between populations. Multiple cut-point summary estimates stratified by population suggest that Arabs have lower sensitivity and higher specificity at an HbA1c cut-point of 44 mmol/mol (6.2%) compared with European populations. Estimates also suggest similar test performance at cut-points of 44 mmol/mol (6.2%) and 48 mmol/mol (6.5%) for Arabs. Given the low sensitivity of HbA1c in the high-risk Arab American population, we recommend a combination of glucose-based and HbA1c testing to ensure an accurate and timely diagnosis of diabetes. © 2016 Diabetes UK.
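
    The summary measures used above can be computed from a 2x2 table as in the sketch below. The counts are hypothetical (chosen only so the sensitivity and specificity resemble the pooled 42% and 97%), and the bivariate mixed-model pooling and SROC construction used in the meta-analysis are not reproduced.

```python
import math

def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity, Youden index, and diagnostic odds ratio
    from a 2x2 table of test results against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1.0
    dor = (tp * tn) / (fp * fn)
    log_dor_se = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)   # Woolf's method
    return sens, spec, youden, dor, log_dor_se

# Hypothetical counts, not data from any included study.
sens, spec, youden, dor, se = diagnostic_summary(tp=84, fp=30, fn=116, tn=970)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"Youden J={youden:.2f} DOR={dor:.1f} (SE of log DOR={se:.2f})")
```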

  3. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, perform global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, and draw on results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and into the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends strongly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.

  4. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As emission projections uncertainties are inevitable and traditional statistical treatments of uncertainty are highly time/resources consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, concretely over the 12 highest emitting sectors regarding greenhouse gas and air pollutants emissions. Examples of methodology application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is an interest on how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
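
    The idea of deriving non-statistical uncertainty bands by varying the driving factors of a projection can be sketched as below. The base-year emissions, growth rates, and emission-factor trend are invented numbers, not figures from the Spanish inventory.

```python
import numpy as np

# Sketch of sensitivity-based uncertainty bands for an emission projection:
# emissions = activity * emission factor, with activity driven by an assumed
# annual growth rate. Varying the growth rate (the "driving factor") gives a
# non-statistical band around the central projection. All numbers are invented.
years = np.arange(2010, 2021)
base_emissions_2010 = 100.0                       # kt of pollutant in the base year
emission_factor_trend = 0.99 ** (years - 2010)    # mild technology improvement

def projection(annual_growth):
    activity = (1 + annual_growth) ** (years - 2010)
    return base_emissions_2010 * activity * emission_factor_trend

central = projection(0.02)
low, high = projection(0.00), projection(0.04)    # low/high growth assumptions
print(f"2020 projection: {central[-1]:.1f} kt  "
      f"(band {low[-1]:.1f} to {high[-1]:.1f} kt)")
```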

  5. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of any parameters to recharge. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. The value of a rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models mean that the calibration of all model parameters, or the estimation of all of their uncertainty, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
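
    As a point of reference for the screening method mentioned above, the sketch below computes radial one-at-a-time elementary effects (a simplified variant of the Morris trajectory design) and reports mu* and sigma per parameter; the test function and design sizes are arbitrary, and the MVA convergence test itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Toy model with three parameters of very different influence.
    return np.sin(x[..., 0]) + 7.0 * np.sin(x[..., 1]) ** 2 + 0.05 * x[..., 2] ** 4

def elementary_effects(n_base=200, dim=3, delta=0.05):
    """Radial one-at-a-time elementary effects: mu* is the mean absolute
    effect per parameter, sigma the spread (interaction/nonlinearity proxy)."""
    base = rng.uniform(size=(n_base, dim))
    ee = np.empty((n_base, dim))
    for i in range(dim):
        perturbed = base.copy()
        perturbed[:, i] += delta
        ee[:, i] = (model(perturbed) - model(base)) / delta
    return np.mean(np.abs(ee), axis=0), np.std(ee, axis=0)

mu_star, sigma = elementary_effects()
for i, (m, s) in enumerate(zip(mu_star, sigma)):
    print(f"parameter {i}: mu* = {m:6.2f}   sigma = {s:6.2f}")
```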

  7. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed us to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.

  8. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  10. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.

  11. Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models.

    PubMed

    Erguler, Kamil; Stumpf, Michael P H

    2011-05-01

    The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.

  12. Quantitative estimates of the impact of sensitivity and specificity in mammographic screening in Germany.

    PubMed Central

    Warmerdam, P G; de Koning, H J; Boer, R; Beemsterboer, P M; Dierks, M L; Swart, E; Robra, B P

    1997-01-01

    STUDY OBJECTIVE: To estimate quantitatively the impact of the quality of mammographic screening (in terms of sensitivity and specificity) on the effects and costs of nationwide breast cancer screening. DESIGN: Three plausible "quality" scenarios for a biennial breast cancer screening programme for women aged 50-69 in Germany were analysed in terms of costs and effects using the Microsimulation Screening Analysis model of breast cancer screening and the natural history of breast cancer. Firstly, sensitivity and specificity in the expected situation (or "baseline" scenario) were estimated from a model-based analysis of empirical data from 35,000 screening examinations in two German pilot projects. In the second "high quality" scenario, these properties were based on the more favourable diagnostic results from breast cancer screening projects and the nationwide programme in The Netherlands. Thirdly, a worst case, "low quality" hypothetical scenario with a 25% lower sensitivity than that experienced in The Netherlands was analysed. SETTING: The epidemiological and social situation in Germany in relation to mass screening for breast cancer. RESULTS: In the "baseline" scenario, an 11% reduction in breast cancer mortality was expected in the total German female population, i.e., 2100 breast cancer deaths would be prevented per year. It was estimated that the "high quality" scenario, based on Dutch experience, would lead to the prevention of an additional 200 deaths per year and would also cut the number of false positive biopsy results by half. The cost per life year gained varied from Deutsche mark (DM) 15,000 in the "high quality" scenario to DM 21,000 in the "low quality" setting. CONCLUSIONS: Up to 20% of the total costs of a screening programme can be spent on quality improvement in order to achieve a substantially higher reduction in mortality and reduce undesirable side effects while retaining the same cost effectiveness ratio as that estimated from the German data. PMID:9196649

  13. Rules of Thumb for Depth of Investigation, Pseudo-Position and Resolution of the Electrical Resistivity Method from Analysis of the Moments of the Sensitivity Function for a Homogeneous Half-Space

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-12-01

    The electrical resistivity method is now highly developed with 2D and even 3D surveys routinely performed and with available fast inversion software. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function which must be calculated numerically for a general four electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.

  14. Prevalence and trends of infection with Mycobacterium tuberculosis in Djibouti, testing an alternative method.

    PubMed

    Trébucq, A; Guérin, N; Ali Ismael, H; Bernatas, J J; Sèvre, J P; Rieder, H L

    2005-10-01

    Djibouti, 1994 and 2001. To estimate the prevalence of tuberculosis (TB) and average annual risk of TB infection (ARTI) and trends, and to test a new method for calculations. Tuberculin surveys among schoolchildren and sputum smear-positive TB patients. Prevalence of infection was calculated using cut-off points, the mirror image technique, mixture analysis, and a new method based on the operating characteristics of the tuberculin test. Test sensitivity was derived from tuberculin reactions among TB patients and test specificity from a comparison of reaction size distributions among children with and without a BCG scar. The ARTI was estimated to lie between 2.6% and 3.1%, with no significant changes between 1994 and 2001. The close match of the distributions between children tested in 1994 and patients justifies the utilisation of the latter to determine test sensitivity. This new method gave very consistent estimates of prevalence of infection for any induration for values between 15 and 20 mm. Specificity was successfully determined for 1994, but not for 2001. Mixture analysis confirmed the estimates obtained with the new method. Djibouti has a high ARTI, and no apparent change over the observation time was found. Using operating test characteristics to estimate prevalence of infection looks promising.

  15. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    PubMed

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is quickly progressing. Whether applied individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of the reference test results - here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates for EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering the LLNA variability in the assessment of skin sensitization test methods and strategies and provide estimates thereof.

  16. Connectome sensitivity or specificity: which is more important?

    PubMed

    Zalesky, Andrew; Fornito, Alex; Cocchi, Luca; Gollo, Leonardo L; van den Heuvel, Martijn P; Breakspear, Michael

    2016-11-15

    Connectomes with high sensitivity and high specificity are unattainable with current axonal fiber reconstruction methods, particularly at the macro-scale afforded by magnetic resonance imaging. Tensor-guided deterministic tractography yields sparse connectomes that are incomplete and contain false negatives (FNs), whereas probabilistic methods steered by crossing-fiber models yield dense connectomes, often with low specificity due to false positives (FPs). Densely reconstructed probabilistic connectomes are typically thresholded to improve specificity at the cost of a reduction in sensitivity. What is the optimal tradeoff between connectome sensitivity and specificity? We show empirically and theoretically that specificity is paramount. Our evaluations of the impact of FPs and FNs on empirical connectomes indicate that specificity is at least twice as important as sensitivity when estimating key properties of brain networks, including topological measures of network clustering, network efficiency and network modularity. Our asymptotic analysis of small-world networks with idealized modular structure reveals that as the number of nodes grows, specificity becomes exactly twice as important as sensitivity to the estimation of the clustering coefficient. For the estimation of network efficiency, the relative importance of specificity grows linearly with the number of nodes. The greater importance of specificity is due to FPs occurring more prevalently between network modules rather than within them. These spurious inter-modular connections have a dramatic impact on network topology. We argue that efforts to maximize the sensitivity of connectome reconstruction should be realigned with the need to map brain networks with high specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
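
    The asymmetry between false positives and false negatives can be reproduced on a toy modular graph with networkx, as sketched below; the community sizes, densities, and error counts are arbitrary choices, not the paper's connectome data.

```python
import random
import networkx as nx

random.seed(0)

# Toy modular network: 4 communities of 25 nodes, dense within, sparse between.
G = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.01, seed=0)
base_clustering = nx.average_clustering(G)

def with_false_positives(g, n_fp):
    """Add n_fp spurious edges; uniform node pairs are mostly inter-modular."""
    g = g.copy()
    nodes = list(g.nodes())
    while n_fp > 0:
        u, v = random.sample(nodes, 2)
        if not g.has_edge(u, v):
            g.add_edge(u, v)
            n_fp -= 1
    return g

def with_false_negatives(g, n_fn):
    """Delete n_fn randomly chosen true edges."""
    g = g.copy()
    g.remove_edges_from(random.sample(list(g.edges()), n_fn))
    return g

n_err = 100
print("baseline clustering:       ", round(base_clustering, 3))
print("after 100 false positives: ",
      round(nx.average_clustering(with_false_positives(G, n_err)), 3))
print("after 100 false negatives: ",
      round(nx.average_clustering(with_false_negatives(G, n_err)), 3))
```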

  17. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.

  18. The cost of a case of subclinical ketosis in Canadian dairy herds

    PubMed Central

    Gohary, Khaled; Overton, Michael W.; Von Massow, Michael; LeBlanc, Stephen J.; Lissemore, Kerry D.; Duffield, Todd F.

    2016-01-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation. PMID:27429460
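
    The per-case figure reported above is simply the sum of the four cost components; a short sketch reproduces that arithmetic and adds a purely hypothetical what-if on the days-open component:

```python
# Reproducing the arithmetic reported in the abstract: the per-case cost of
# subclinical ketosis is the sum of its four components (values in CAD).
components = {
    "increased clinical disease incidence": 76,
    "longer time to pregnancy (days open)":  57,
    "culling and death in early lactation":  26,
    "milk production loss":                  44,
}
total = sum(components.values())
print(f"cost per case of SCK: ${total}")   # -> $203

# Hypothetical sensitivity check on the days-open component (not from the paper):
# scaling that single component shifts the total proportionally.
for scale in (0.5, 1.0, 1.5):
    adjusted = total - components["longer time to pregnancy (days open)"] * (1 - scale)
    print(f"days-open component x{scale}: ${adjusted:.0f}")
```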

  19. The cost of a case of subclinical ketosis in Canadian dairy herds.

    PubMed

    Gohary, Khaled; Overton, Michael W; Von Massow, Michael; LeBlanc, Stephen J; Lissemore, Kerry D; Duffield, Todd F

    2016-07-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation.

  20. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  1. Fish oil supplementation and insulin sensitivity: a systematic review and meta-analysis.

    PubMed

    Gao, Huanqing; Geng, Tingting; Huang, Tao; Zhao, Qinghua

    2017-07-03

    Fish oil supplementation has been shown to be associated with a lower risk of metabolic syndrome and to benefit a wide range of chronic diseases, such as cardiovascular disease, type 2 diabetes and several types of cancers. However, the evidence on the effect of fish oil supplementation on glucose metabolism and insulin sensitivity is still controversial. This meta-analysis summarized the existing evidence on the relationship between fish oil supplementation and insulin sensitivity and aimed to evaluate whether fish oil supplementation could improve insulin sensitivity. We searched the Cochrane Library, PubMed and Embase databases for relevant studies up to Dec 2016. Two researchers screened the literature independently using the selection and exclusion criteria. Studies were pooled using random-effects models to estimate a pooled SMD and corresponding 95% CI. The meta-analysis was performed with Stata 13.1 software. A total of 17 studies with 672 participants were included in this meta-analysis after screening the 498 published articles found in the initial search. In the pooled analysis, fish oil supplementation had no effect on insulin sensitivity compared with placebo (SMD 0.17, 95% CI -0.15 to 0.48, p = 0.292). In subgroup analysis, fish oil supplementation benefited insulin sensitivity among people who were experiencing at least one symptom of metabolic disorders (SMD 0.53, 95% CI 0.17 to 0.88, p < 0.001). There were no significant differences between subgroups defined by the method of measuring insulin sensitivity, the dose of omega-3 polyunsaturated fatty acids (n-3 PUFA) in the fish oil supplement, or the duration of the intervention. The sensitivity analysis indicated that the results were robust. Short-term fish oil supplementation is associated with increased insulin sensitivity among people with metabolic disorders.
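
    The pooling described above (random-effects SMDs) follows the standard DerSimonian-Laird scheme; a minimal sketch with made-up study-level inputs (the review itself was run in Stata 13.1) shows the mechanics:

```python
# Minimal DerSimonian-Laird random-effects pooling of standardized mean
# differences (SMDs). The study values below are made up for illustration.
import math

smd = [0.30, -0.10, 0.55, 0.05, 0.20]     # hypothetical per-study SMDs
var = [0.04, 0.06, 0.09, 0.05, 0.07]      # hypothetical per-study variances

w_fixed = [1.0 / v for v in var]                     # inverse-variance weights
pooled_fixed = sum(w * y for w, y in zip(w_fixed, smd)) / sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird estimate of between-study variance tau^2
Q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, smd))
df = len(smd) - 1
C = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights incorporate tau^2 in addition to the within-study variance
w_rand = [1.0 / (v + tau2) for v in var]
pooled = sum(w * y for w, y in zip(w_rand, smd)) / sum(w_rand)
se = math.sqrt(1.0 / sum(w_rand))
print(f"pooled SMD = {pooled:.2f}, 95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}")
```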

  2. Estimating the Standard Error of Robust Regression Estimates.

    DTIC Science & Technology

    1987-03-01

    error is O(n^(4/5)). In another Monte Carlo study, McKean and Schrader (1984) found that the tests resulting from studentizing by d^(1/2) with d = O(n^(4/5)) ... Sheather, S. J. and McKean, J. W. (1987). A comparison of testing and ... Wiley, New York. Welsch, R. E. (1980). Regression Sensitivity Analysis and Bounded-Influence Estimation, in Evaluation of Econometric Models, eds. J...

  3. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.

  4. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  5. Automatic Target Recognition Classification System Evaluation Methodology

    DTIC Science & Technology

    2002-09-01

    Testing Set of Two-Class XOR Data (250 Samples) ... Decision Analysis Process Flow Chart ... ROC curve meta-analysis, which is the estimation of the true ROC curve of a given diagnostic system through ROC analysis across many studies or ... technique can be very effective in sensitivity analysis; trying to determine which data points have the most effect on the solution, and in ...

  6. Sensitivity Analysis and Requirements for Temporally and Spatially Resolved Thermometry Using Neutron Resonance Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, Juan Carlos; Barnes, Cris William; Mocko, Michael Jeffrey

    This report is intended to examine the use of neutron resonance spectroscopy (NRS) to make time-dependent and spatially resolved temperature measurements of materials in extreme conditions. Specifically, the sensitivity of the temperature estimate to neutron-beam and diagnostic parameters is examined. Based on that examination, requirements are set on a pulsed neutron source and diagnostics to make a meaningful measurement.

  7. Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity

    PubMed Central

    Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.

    2013-01-01

    Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
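
    One of the better-performing measures named above, Higuchi's fractal dimension, is straightforward to implement; the following is an illustrative sketch (not the authors' code), which for white noise returns a value close to 2:

```python
# Illustrative implementation of Higuchi's fractal dimension estimate, one of
# the complexity measures compared in the study (sketch, not the authors' code).
import numpy as np

def higuchi_fd(x, k_max=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            n = (N - 1 - m) // k          # number of usable increments at lag k
            if n < 1:
                continue
            idx = m + np.arange(n + 1) * k
            # normalized curve length for this starting offset m
            length = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k * k)
            Lk.append(length)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    # Fractal dimension is the slope of log L(k) against log(1/k)
    return np.polyfit(log_k, log_L, 1)[0]

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(512)))   # white noise -> approximately 2
```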

  8. Cost Effectiveness of Ofatumumab Plus Chlorambucil in First-Line Chronic Lymphocytic Leukaemia in Canada.

    PubMed

    Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha

    2016-01-01

    Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.

  9. Cost-effectiveness analysis of quadrivalent influenza vaccination in at-risk adults and the elderly: an updated analysis in the U.K.

    PubMed

    Meier, G; Gregg, M; Poulsen Nautrup, B

    2015-01-01

    To update an earlier evaluation estimating the cost-effectiveness of quadrivalent influenza vaccination (QIV) compared with trivalent influenza vaccination (TIV) in the adult population currently recommended for influenza vaccination in the UK (all people aged ≥65 years and people aged 18-64 years with clinical risk conditions). This analysis takes into account updated vaccine prices, reference costs, influenza strain circulation, and burden of illness data. A lifetime, multi-cohort, static Markov model was constructed with seven age groups. The model was run in 1-year cycles for a lifetime, i.e., until the youngest patients at entry reached the age of 100 years. The base-case analysis was from the perspective of the UK National Health Service, with a secondary analysis from the societal perspective. Costs and benefits were discounted at 3.5%. Herd effects were not included. Inputs were derived from systematic reviews, peer-reviewed articles, and government publications and databases. One-way and probabilistic sensitivity analyses were performed. In the base-case, QIV would be expected to avoid 1,413,392 influenza cases, 41,780 hospitalizations, and 19,906 deaths over the lifetime horizon, compared with TIV. The estimated incremental cost-effectiveness ratio (ICER) was £14,645 per quality-adjusted life-year (QALY) gained. From the societal perspective, the estimated ICER was £13,497/QALY. A strategy of vaccinating only people aged ≥65 years had an estimated ICER of £11,998/QALY. Sensitivity analysis indicated that only two parameters, seasonal variation in influenza B matching and influenza A circulation, had a substantial effect on the ICER. QIV would be likely to be cost-effective compared with TIV in 68% of simulations with a willingness-to-pay threshold of <£20,000/QALY and 87% with a willingness-to-pay threshold of <£30,000/QALY. In this updated analysis, QIV was estimated to be cost-effective compared with TIV in the U.K.

  10. Orthopaedic trauma care in Haiti: a cost-effectiveness analysis of an innovative surgical residency program.

    PubMed

    Carlson, Lucas C; Slobogean, Gerard P; Pollak, Andrew N

    2012-01-01

    In an effort to sustainably strengthen orthopaedic trauma care in Haiti, a 2-year Orthopaedic Trauma Care Specialist (OTCS) program for Haitian physicians has been developed. The program will provide focused training in orthopaedic trauma surgery and fracture care utilizing a train-the-trainer approach. The purpose of this analysis was to calculate the cost-effectiveness of the program relative to its potential to decrease disability in the Haitian population. Using established methodology originally outlined in the World Health Organization's Global Burden of Disease project, a cost-effectiveness analysis was performed for the OTCS program in Haiti. Costs and disability-adjusted life-years (DALYs) averted were estimated per fellow trained in the OTCS program by using a 20-year career time horizon. Probabilistic sensitivity analysis was used to simultaneously test the joint uncertainty of the cost and averted DALY estimates. A willingness-to-pay threshold of $1200 per DALY averted, equal to the gross domestic product per capita in Haiti, was selected on the basis of World Health Organization's definition of highly cost-effective health interventions. The OTCS program results in an incremental cost of $1,542,544 ± $109,134 and 12,213 ± 2,983 DALYs averted per fellow trained. The cost-effectiveness ratio of $133.97 ± $34.71 per DALY averted is well below the threshold of $1200 per DALY averted. Furthermore, sensitivity analysis suggests that implementing the OTCS program is the economically preferred strategy with more than 95% probability at a willingness-to-pay threshold of $200 per DALY averted and across the entire range of potential variable inputs. The current economic analysis suggests the OTCS program to be a highly cost-effective intervention. Probabilistic sensitivity analysis demonstrates that the conclusions remain stable even when considering the joint uncertainty of the cost and DALY estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
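
    A short sketch of the probabilistic sensitivity analysis logic, using the point figures quoted above; treating the reported "±" values as standard deviations of normal distributions is an assumption made here for illustration only, so the output will not exactly reproduce the published ratio or probability:

```python
# Sketch of a probabilistic sensitivity analysis using the figures from the
# abstract. Interpreting the reported "+/-" values as standard deviations of
# normal distributions is an assumption made for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

cost  = rng.normal(1_542_544, 109_134, n)   # incremental cost per fellow (USD)
dalys = rng.normal(12_213, 2_983, n)        # DALYs averted per fellow

cer = cost / dalys                           # cost per DALY averted
wtp = 200                                    # willingness-to-pay threshold (USD/DALY)

print(f"mean CER: ${cer.mean():.0f} per DALY averted")
print(f"P(CER < ${wtp}/DALY averted) = {np.mean(cer < wtp):.1%}")
```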

  11. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    A model based on vegetation ecophysiological processes contains many parameters, and reasonable parameter values will greatly improve its simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The most influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.

  12. Estimating the production, consumption and export of cannabis: The Dutch case.

    PubMed

    van der Giessen, Mark; van Ooyen-Houben, Marianne M J; Moolenaar, Debora E G

    2016-05-01

    Quantifying an illegal phenomenon like a drug market is inherently complex due to its hidden nature and the limited availability of reliable information. This article presents findings from a recent estimate of the production, consumption and export of Dutch cannabis and discusses the opportunities provided by, and limitations of, mathematical models for estimating the illegal cannabis market. The data collection consisted of a comprehensive literature study, secondary analyses on data from available registrations (2012-2014) and previous studies, and expert opinion. The cannabis market was quantified with several mathematical models. The data analysis included a Monte Carlo simulation to come to a 95% interval estimate (IE) and a sensitivity analysis to identify the most influential indicators. The annual production of Dutch cannabis was estimated to be between 171 and 965 tons (95% IE of 271-613 tons). The consumption was estimated to be between 28 and 119 tons, depending on the inclusion or exclusion of non-residents (95% IE of 51-78 tons or 32-49 tons, respectively). The export was estimated to be between 53 and 937 tons (95% IE of 206-549 tons or 231-573 tons, respectively). Mathematical models are valuable tools for the systematic assessment of the size of illegal markets and determining the uncertainty inherent in the estimates. The estimates required the use of many assumptions and the availability of reliable indicators was limited. This uncertainty is reflected in the wide ranges of the estimates. The estimates are sensitive to 10 of the 45 indicators. These 10 account for 86-93% of the variation found. Further research should focus on improving the variables and the independence of the mathematical models. Copyright © 2016 Elsevier B.V. All rights reserved.
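
    The Monte Carlo logic described above (propagate uncertain inputs, report a 95% interval, inspect input influence) can be sketched as follows; the triangular distributions and their parameters are placeholders, not the authors' actual model inputs:

```python
# Illustrative Monte Carlo sketch of the export = production - consumption
# logic described in the abstract. The triangular distributions and their
# parameters below are placeholders, not the authors' model inputs.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

production  = rng.triangular(171, 500, 965, n)   # tons/year (low, mode, high) - assumed
consumption = rng.triangular(28, 60, 119, n)     # tons/year - assumed
export = production - consumption

lo, hi = np.percentile(export, [2.5, 97.5])
print(f"export point estimate ~ {export.mean():.0f} tons, 95% interval ({lo:.0f}, {hi:.0f})")

# Crude influence check: correlation of each input with the output
for name, x in [("production", production), ("consumption", consumption)]:
    print(name, round(np.corrcoef(x, export)[0, 1], 2))
```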

  13. Diagnostic accuracy of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for the detection of antibodies against Neospora caninum in milk from dairy cows.

    PubMed

    Chatziprodromidou, I P; Apostolou, T

    2018-04-01

    The aim of the study was to estimate the sensitivity and specificity of an enzyme-linked immunosorbent assay (ELISA) and an immunoblot (IB) for detecting antibodies against Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with the STRADAS-paratuberculosis guidelines for reporting test accuracy. We tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that does not assume conditional dependence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on the posterior assessments. We concluded that ELISA can be used to screen bulk milk and, secondly, that IB can be used whenever needed.

  14. Performance of the high-sensitivity troponin assay in diagnosing acute myocardial infarction: systematic review and meta-analysis

    PubMed Central

    Al-Saleh, Ayman; Alazzoni, Ashraf; Al Shalash, Saleh; Ye, Chenglin; Mbuagbaw, Lawrence; Thabane, Lehana; Jolly, Sanjit S.

    2014-01-01

    Background High-sensitivity cardiac troponin assays have been adopted by many clinical centres worldwide; however, clinicians are uncertain how to interpret the results. We sought to assess the utility of these assays in diagnosing acute myocardial infarction (MI). Methods We carried out a systematic review and meta-analysis of studies comparing high-sensitivity with conventional assays of cardiac troponin levels among adults with suspected acute MI in the emergency department. We searched MEDLINE, EMBASE and Cochrane databases up to April 2013 and used bivariable random-effects modelling to obtain summary parameters for diagnostic accuracy. Results We identified 9 studies that assessed the use of high-sensitivity troponin T assays (n = 9186 patients). The summary sensitivity of these tests in diagnosing acute MI at presentation to the emergency department was estimated to be 0.94 (95% confidence interval [CI] 0.89–0.97); for conventional tests, it was 0.72 (95% CI 0.63–0.79). The summary specificity was 0.73 (95% CI 0.64–0.81) for the high-sensitivity assay compared with 0.95 (95% CI 0.93–0.97) for the conventional assay. The differences in estimates of the summary sensitivity and specificity between the high-sensitivity and conventional assays were statistically significant (p < 0.01). The area under the curve was similar for both tests carried out 3–6 hours after presentation. Three studies assessed the use of high-sensitivity troponin I assays and showed similar results. Interpretation Used at presentation to the emergency department, the high-sensitivity cardiac troponin assay has improved sensitivity, but reduced specificity, compared with the conventional troponin assay. With repeated measurements over 6 hours, the area under the curve is similar for both tests, indicating that the major advantage of the high-sensitivity test is early diagnosis. PMID:25295240

  15. Health impact assessment of a skin sensitizer: Analysis of potential policy measures aimed at reducing geraniol concentrations in personal care products and household cleaning products.

    PubMed

    Jongeneel, W P; Delmaar, J E; Bokkers, B G H

    2018-06-08

    A methodology to assess the health impact of skin sensitizers is introduced, which consists of the comparison of the probabilistic aggregated exposure with a probabilistic (individual) human sensitization or elicitation induction dose. The health impact of potential policy measures aimed at reducing the concentration of a fragrance allergen, geraniol, in consumer products is analysed in a simulated population derived from multiple product use surveys. Our analysis shows that current dermal exposure to geraniol from personal care and household cleaning products leads to new cases of contact allergy and induces clinical symptoms in those already sensitized. We estimate that this exposure results yearly in 34 new cases of geraniol contact allergy per million consumers in Western and Northern Europe, mainly due to exposure to household cleaning products. About twice as many consumers (60 per million) are projected to suffer from clinical symptoms due to re-exposure to geraniol. Policy measures restricting geraniol concentrations to <0.01% will noticeably reduce new cases of sensitization and decrease the number of people with clinical symptoms as well as the frequency of occurrence of these clinical symptoms. The estimated numbers should be interpreted with caution and provide only a rough indication of the health impact. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. A Fiscal Analysis of Fixed-Amount Federal Grants-in-Aid: The Case of Vocational Education.

    ERIC Educational Resources Information Center

    Patterson, Philip D., Jr.

    A fiscal analysis of fixed-amount Federal grant programs using the criteria of effectiveness, efficiency, and equity is essential to an evaluation of the Federal grant structure. Measures of program need should be current, comparable over time and among states, and subjected to sensitivity analysis so that future grants can be estimated. Income…

  17. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895

  18. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.

  19. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
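
    The core regression described above (weighted least squares minimized by modified Gauss-Newton steps, with sensitivities from forward-difference perturbation) can be sketched generically; the "process model" below is a stand-in exponential function, not anything from UCODE_2005 itself:

```python
# Generic sketch of the algorithm described for UCODE_2005: minimize a weighted
# least-squares objective with Gauss-Newton steps, using forward-difference
# perturbation sensitivities. The process model here is a stand-in function.
import numpy as np

def simulate(p, x):
    # stand-in process model: y = p0 * exp(-p1 * x)
    return p[0] * np.exp(-p[1] * x)

def jacobian(p, x, h=1e-6):
    # forward-difference sensitivities d(simulated value)/d(parameter)
    base = simulate(p, x)
    J = np.empty((len(x), len(p)))
    for j in range(len(p)):
        pj = p.copy()
        pj[j] += h * max(1.0, abs(p[j]))
        J[:, j] = (simulate(pj, x) - base) / (pj[j] - p[j])
    return J

x = np.linspace(0, 4, 20)
obs = simulate(np.array([2.0, 0.7]), x) + np.random.default_rng(3).normal(0, 0.02, x.size)
w = np.full(x.size, 1.0 / 0.02**2)          # weights = 1 / observation variance

p = np.array([1.0, 1.0])                    # starting parameter values
for _ in range(10):
    r = obs - simulate(p, x)                # residuals drive the update
    J = jacobian(p, x)
    JW = J.T * w
    step = np.linalg.solve(JW @ J, JW @ r)  # Gauss-Newton step on the weighted problem
    p += step
    if np.max(np.abs(step)) < 1e-8:
        break

print("estimated parameters:", p)           # should approach [2.0, 0.7]
print("weighted SSE:", float(w @ (obs - simulate(p, x))**2))
```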

  20. Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random

    PubMed Central

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David

    2014-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420

  1. Simulation studies of wide and medium field of view earth radiation data analysis

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1978-01-01

    A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.

  2. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines to within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.

  3. Estimating effective data density in a satellite retrieval or an objective analysis

    NASA Technical Reports Server (NTRS)

    Purser, R. J.; Huang, H.-L.

    1993-01-01

    An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.

  4. Simple Electrolyzer Model Development for High-Temperature Electrolysis System Analysis Using Solid Oxide Electrolysis Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JaeHwa Koh; DuckJoo Yoon; Chang H. Oh

    2010-07-01

    An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects of the principal parameters have been estimated by sensitivity studies based on the developed model. The main parameters considered are current density, area-specific resistance, temperature, pressure, and molar fractions and flow rates at the inlet and outlet. Finally, a simple model for a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very high temperature reactors is evaluated.
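
    A rough, illustrative operating-point model (not the report's model) shows how the named parameters feed a cell-voltage calculation and how one-at-a-time perturbations expose their relative influence; the reversible-potential correlation and all numerical values below are assumptions for the sketch:

```python
# Illustrative SOEC operating-point sketch: cell voltage = reversible potential
# + Nernst concentration term + ohmic loss (area-specific resistance), with
# one-at-a-time parameter perturbations. All correlations/values are assumed.
import math

F, R = 96485.0, 8.314    # Faraday constant [C/mol], gas constant [J/mol/K]

def cell_voltage(j=0.5, asr=1.25, T=1073.0, p=1.0, y_h2=0.5, y_h2o=0.5):
    """j [A/cm^2], asr [ohm*cm^2], T [K], p [atm], outlet mole fractions."""
    e_rev = 1.2723 - 2.7645e-4 * T                           # assumed linear fit, V
    nernst = (R * T / (2 * F)) * math.log(y_h2 * math.sqrt(0.21 * p) / y_h2o)
    return e_rev + nernst + j * asr                          # ohmic loss from ASR

base = cell_voltage()
print(f"base-case cell voltage: {base:.3f} V")

# One-at-a-time +10% perturbations of the main parameters named in the abstract
for name, kwargs in [("current density", {"j": 0.55}),
                     ("ASR",             {"asr": 1.375}),
                     ("temperature",     {"T": 1180.3}),
                     ("pressure",        {"p": 1.1})]:
    v = cell_voltage(**kwargs)
    print(f"+10% {name:15s} -> {v:.3f} V ({100*(v-base)/base:+.1f}%)")
```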

  5. Immunization against Haemophilus Influenzae Type b in Iran; Cost-utility and Cost-benefit Analyses

    PubMed Central

    Moradi-Lakeh, Maziar; Shakerian, Sareh; Esteghamati, Abdoulreza

    2012-01-01

    Background: Haemophilus Influenzae type b (Hib) is an important cause of morbidity and mortality in children. Although its burden is considerably preventable by vaccine, routine vaccination against Hib has not been defined in the National Immunization Program of Iran. This study was performed to assess the cost-benefit and cost-utility of running an Hib vaccination program in Iran. Methods: Based on a previous systematic review and meta-analysis for vaccine efficacy, we estimated the averted DALYs (Disability adjusted life years) and cost-benefit of vaccination. Different acute invasive forms of Hib infection and the permanent sequels were considered for estimating the attributed DALYs. We used a societal perspective for economic evaluation and included both direct and indirect costs of alternative options about vaccination. An annual discount rate of 3% and standard age-weighting were used for estimation. To assess the robustness of the results, a sensitivity analysis was performed. Results: The incidence of Hib infection was estimated 43.0 per 100000, which can be reduced to 6.7 by vaccination. Total costs of vaccination were estimated at US$ 15,538,129. Routine vaccination of the 2008 birth cohort would prevent 4079 DALYs at a cost per averted-DALY of US$ 4535. If we consider parents’ loss of income and future productivity loss of children, it would save US$ 8,991,141, with a benefit-cost ratio of 2.14 in the base-case analysis. Sensitivity analysis showed a range of 0.78 to 3.14 for benefit-to-cost ratios. Conclusion: Considering costs per averted DALY, vaccination against Hib is a cost-effective health intervention in Iran, and allocating resources for routine vaccination against Hib seems logical. PMID:22708030

  6. U.S. Balance-of-Station Cost Drivers and Sensitivities (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maples, B.

    2012-10-01

    With balance-of-system (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from U.S. offshore wind plants.

  7. Detection of Echinococcus multilocularis by MC-PCR: evaluation of diagnostic sensitivity and specificity without gold standard.

    PubMed

    Wahlström, Helene; Comin, Arianna; Isaksson, Mats; Deplazes, Peter

    2016-01-01

    A semi-automated magnetic capture probe-based DNA extraction and real-time PCR method (MC-PCR), allowing for a more efficient large-scale surveillance of Echinococcus multilocularis occurrence, has been developed. The test sensitivity has previously been evaluated using the sedimentation and counting technique (SCT) as a gold standard. However, as the sensitivity of the SCT is not 1, the test characteristics of the MC-PCR were also evaluated using latent class analysis, a methodology not requiring a gold standard. Test results, MC-PCR and SCT, from a previous evaluation of the MC-PCR using 177 foxes shot in the spring (n=108) and autumn 2012 (n=69) in high prevalence areas in Switzerland were used. Latent class analysis was used to estimate the test characteristics of the MC-PCR. Although it was not the primary aim of this study, estimates of the test characteristics of the SCT were also obtained. This study showed that the sensitivity of the MC-PCR was 0.88 [95% posterior credible interval (PCI) 0.80-0.93], which was not significantly different from that of the SCT, 0.83 (95% PCI 0.76-0.88), which is currently considered the gold standard. The specificity of both tests was high, 0.98 (95% PCI 0.94-0.99) for the MC-PCR and 0.99 (95% PCI 0.99-1) for the SCT. In a previous study, using fox scats from a low prevalence area, the specificity of the MC-PCR was higher, 0.999 (95% PCI 0.997-1). One reason for the lower estimate of the specificity in this study could be that the MC-PCR detects DNA from infected but non-infectious rodents eaten by foxes. When using MC-PCR in low prevalence areas or areas free from the parasite, a positive result in the MC-PCR should be regarded as a true positive. The sensitivity of the MC-PCR (0.88) was comparable to the sensitivity of the SCT (0.83).
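
    A worked example makes the low-prevalence argument concrete: the positive predictive value depends strongly on specificity when prevalence is low. The sensitivity and specificity values below are the estimates quoted above; the prevalences are hypothetical:

```python
# Worked example: positive predictive value (PPV) from the reported
# sensitivity/specificity, at assumed (hypothetical) prevalences.
def ppv(se, sp, prev):
    tp = se * prev                  # expected true positives per animal tested
    fp = (1 - sp) * (1 - prev)      # expected false positives per animal tested
    return tp / (tp + fp)

se_mcpcr = 0.88                     # MC-PCR sensitivity estimated in this study
sp_this_study = 0.98                # MC-PCR specificity estimated in this study
sp_low_prev = 0.999                 # earlier estimate from a low-prevalence area

for prev in (0.30, 0.01):           # hypothetical fox-level prevalences
    print(f"prevalence {prev:.0%}: "
          f"PPV = {ppv(se_mcpcr, sp_this_study, prev):.2f} (Sp 0.98), "
          f"{ppv(se_mcpcr, sp_low_prev, prev):.2f} (Sp 0.999)")
```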

  8. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679

  9. Near-Earth Object Astrometric Interferometry

    NASA Technical Reports Server (NTRS)

    Werner, Martin R.

    2005-01-01

    Using astrometric interferometry on near-Earth objects (NEOs) poses many interesting and difficult challenges. Poor reflectance properties and potentially no significant active emissions lead to NEOs having intrinsically low visual magnitudes. Using worst case estimates for signal reflection properties leads to NEOs having visual magnitudes of 27 and higher. Today the most sensitive interferometers in operation have limiting magnitudes of 20 or less. The main reason for this limit is due to the atmosphere, where turbulence affects the light coming from the target, limiting the sensitivity of the interferometer. In this analysis, the interferometer designs assume no atmosphere, meaning they would be placed at a location somewhere in space. Interferometer configurations and operational uncertainties are looked at in order to parameterize the requirements necessary to achieve measurements of low visual magnitude NEOs. This analysis provides a preliminary estimate of what will be required in order to take high resolution measurements of these objects using interferometry techniques.

  10. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    PubMed

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.

  11. Understanding Treatment Effect Estimates When Treatment Effects Are Heterogeneous for More Than One Outcome.

    PubMed

    Brooks, John M; Chapman, Cole G; Schroeder, Mary C

    2018-06-01

    Patient-centred care requires evidence of treatment effects across many outcomes. Outcomes can be beneficial (e.g. increased survival or cure rates) or detrimental (e.g. adverse events, pain associated with treatment, treatment costs, time required for treatment). Treatment effects may also be heterogeneous across outcomes and across patients. Randomized controlled trials are usually insufficient to supply evidence across outcomes. Observational data analysis is an alternative, with the caveat that the treatments observed are choices. Real-world treatment choice often involves complex assessment of expected effects across the array of outcomes. Failure to account for this complexity when interpreting treatment effect estimates could lead to clinical and policy mistakes. Our objective was to assess the properties of treatment effect estimates based on choice when treatments have heterogeneous effects on both beneficial and detrimental outcomes across patients. Simulation methods were used to highlight the sensitivity of treatment effect estimates to the distributions of treatment effects across patients across outcomes. Scenarios with alternative correlations between benefit and detriment treatment effects across patients were used. Regression and instrumental variable estimators were applied to the simulated data for both outcomes. True treatment effect parameters are sensitive to the relationships of treatment effectiveness across outcomes in each study population. In each simulation scenario, treatment effect estimate interpretations for each outcome are aligned with results shown previously in single outcome models, but these estimates vary across simulated populations with the correlations of treatment effects across patients across outcomes. If estimator assumptions are valid, estimates across outcomes can be used to assess the optimality of treatment rates in a study population. However, because true treatment effect parameters are sensitive to correlations of treatment effects across outcomes, decision makers should be cautious about generalizing estimates to other populations.
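
    A toy version of the simulation idea described above (not the authors' simulation) shows how choice based on correlated benefit and detriment effects makes a naive treated-vs-untreated comparison diverge from the population average effect; the distributions, correlation, and noise levels are all assumptions:

```python
# Toy sketch: benefit and detriment treatment effects vary across patients and
# are correlated, treatment is *chosen* based on expected net benefit, and a
# naive treated-vs-untreated comparison no longer recovers the population mean.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
rho = -0.5                                           # assumed benefit/detriment correlation

cov = [[1.0, rho], [rho, 1.0]]
eff = rng.multivariate_normal([2.0, 1.0], cov, n)    # per-patient [benefit, detriment]
benefit, detriment = eff[:, 0], eff[:, 1]

# Treatment choice: treat when perceived net benefit (with decision noise) is positive
treated = (benefit - detriment + rng.normal(0, 0.5, n)) > 0

# Realized beneficial outcome: baseline + treatment effect if treated + noise
y = 1.0 + benefit * treated + rng.normal(0, 1.0, n)

naive_estimate = y[treated].mean() - y[~treated].mean()
print(f"population average benefit:        {benefit.mean():.2f}")
print(f"average benefit among the treated: {benefit[treated].mean():.2f}")
print(f"naive treated-vs-untreated gap:    {naive_estimate:.2f}")
```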

  12. Serological testing versus other strategies for diagnosis of active tuberculosis in India: a cost-effectiveness analysis.

    PubMed

    Dowdy, David W; Steingart, Karen R; Pai, Madhukar

    2011-08-01

    Undiagnosed and misdiagnosed tuberculosis (TB) drives the epidemic in India. Serological (antibody detection) TB tests are not recommended by any agency, but widely used in many countries, including the Indian private sector. The cost and impact of using serology compared with other diagnostic techniques is unknown. Taking a patient cohort conservatively equal to the annual number of serological tests done in India (1.5 million adults suspected of having active TB), we used decision analysis to estimate costs and effectiveness of sputum smear microscopy (US$3.62 for two smears), microscopy plus automated liquid culture (mycobacterium growth indicator tube [MGIT], US$20/test), and serological testing (anda-tb ELISA, US$20/test). Data on test accuracy and costs were obtained from published literature. We adopted the perspective of the Indian TB control sector and an analysis frame of 1 year. Our primary outcome was the incremental cost per disability-adjusted life year (DALY) averted. We performed one-way sensitivity analysis on all model parameters, with multiway sensitivity analysis on variables to which the model was most sensitive. If used instead of sputum microscopy, serology generated an estimated 14,000 more TB diagnoses, but also 121,000 more false-positive diagnoses, 102,000 fewer DALYs averted, and 32,000 more secondary TB cases than microscopy, at approximately four times the incremental cost (US$47.5 million versus US$11.9 million). When added to high-quality sputum smears, MGIT culture was estimated to avert 130,000 incremental DALYs at an incremental cost of US$213 per DALY averted. Serology was dominated by (i.e., more costly and less effective than) MGIT culture and remained less economically favorable than sputum smear or TB culture in one-way and multiway sensitivity analyses. In India, sputum smear microscopy remains the most cost-effective diagnostic test available for active TB; efforts to increase access to quality-assured microscopy should take priority. In areas where high-quality microscopy exists and resources are sufficient, MGIT culture is more cost-effective than serology as an additional diagnostic test for TB. These data informed a recently published World Health Organization policy statement against serological tests.
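
    The final ratio step of the decision analysis can be sketched with the figures quoted in the abstract; this is back-of-the-envelope arithmetic only, not the underlying decision-tree model.

```python
# Figures quoted in the abstract (US$).
cost_serology, cost_smear = 47.5e6, 11.9e6
print(f"Cost ratio, serology vs. smear: {cost_serology / cost_smear:.1f}x")

# Adding MGIT culture to smear: incremental DALYs averted and cost per DALY averted.
incremental_dalys = 130_000
cost_per_daly = 213
print(f"Implied incremental cost of adding MGIT: US${incremental_dalys * cost_per_daly / 1e6:.1f} million")

def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Generic incremental cost-effectiveness ratio: extra cost per unit of effect gained."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)
```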

  13. Meta-Analyses of Diagnostic Accuracy in Imaging Journals: Analysis of Pooling Techniques and Their Effect on Summary Estimates of Diagnostic Accuracy.

    PubMed

    McGrath, Trevor A; McInnes, Matthew D F; Korevaar, Daniël A; Bossuyt, Patrick M M

    2016-10-01

    Purpose To determine whether authors of systematic reviews of diagnostic accuracy studies published in imaging journals used recommended methods for meta-analysis, and to evaluate the effect of traditional methods on summary estimates of sensitivity and specificity. Materials and Methods Medline was searched for published systematic reviews that included meta-analysis of test accuracy data limited to imaging journals published from January 2005 to May 2015. Two reviewers independently extracted study data and classified methods for meta-analysis as traditional (univariate fixed- or random-effects pooling or summary receiver operating characteristic curve) or recommended (bivariate model or hierarchic summary receiver operating characteristic curve). Use of methods was analyzed for variation with time, geographical location, subspecialty, and journal. Results from reviews in which study authors used traditional univariate pooling methods were recalculated with a bivariate model. Results Three hundred reviews met the inclusion criteria, and in 118 (39%) of those, authors used recommended meta-analysis methods. No change in the method used was observed with time (r = 0.54, P = .09); however, there was geographic (χ(2) = 15.7, P = .001), subspecialty (χ(2) = 46.7, P < .001), and journal (χ(2) = 27.6, P < .001) heterogeneity. Fifty-one univariate random-effects meta-analyses were reanalyzed with the bivariate model; the average change in the summary estimate was -1.4% (P < .001) for sensitivity and -2.5% (P < .001) for specificity. The average change in width of the confidence interval was 7.7% (P < .001) for sensitivity and 9.9% (P ≤ .001) for specificity. Conclusion Recommended methods for meta-analysis of diagnostic accuracy in imaging journals are used in a minority of reviews; this has not changed significantly with time. Traditional (univariate) methods allow overestimation of diagnostic accuracy and provide narrower confidence intervals than do recommended (bivariate) methods. (©) RSNA, 2016 Online supplemental material is available for this article.
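
    For context, a sketch of the "traditional" univariate pooling that the review criticizes: a DerSimonian-Laird random-effects pool of logit sensitivities, which ignores the correlation with specificity. The recommended bivariate model instead models logit sensitivity and logit specificity jointly with a between-study covariance, which generally yields slightly lower summary estimates and wider confidence intervals, as reported above. The study counts below are made up.

```python
import numpy as np

def dl_pool_logit(events, totals):
    """Univariate DerSimonian-Laird random-effects pool of proportions on the logit scale."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)              # 0.5 continuity correction
    y = np.log(p / (1 - p))
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)

    w = 1.0 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(mu), (expit(mu - 1.96 * se), expit(mu + 1.96 * se))

# Hypothetical true-positive counts and diseased totals from five primary studies.
sens, ci = dl_pool_logit(events=[45, 30, 60, 22, 80], totals=[50, 36, 70, 30, 90])
print(f"Pooled sensitivity {sens:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```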

  14. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
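
    As a minimal illustration of the kind of calculation involved (not the multistate machinery developed in the paper), consider the single-species occupancy Markov chain with colonization probability γ and extinction probability ε. Its equilibrium occupancy is ψ* = γ/(γ+ε), and the sensitivities follow by differentiation; the numerical values below are arbitrary and the finite-difference check is only a sanity test.

```python
import numpy as np

def equilibrium_occupancy(gamma, eps):
    # Stationary occupancy of the two-state patch Markov chain.
    return gamma / (gamma + eps)

def sensitivities(gamma, eps):
    # Analytic partial derivatives of psi* with respect to gamma and eps.
    d_gamma = eps / (gamma + eps) ** 2
    d_eps = -gamma / (gamma + eps) ** 2
    return d_gamma, d_eps

gamma, eps = 0.3, 0.1
print("psi* =", equilibrium_occupancy(gamma, eps))
print("analytic sensitivities:", sensitivities(gamma, eps))

# Finite-difference check of the analytic derivatives.
h = 1e-6
fd_gamma = (equilibrium_occupancy(gamma + h, eps) - equilibrium_occupancy(gamma - h, eps)) / (2 * h)
fd_eps = (equilibrium_occupancy(gamma, eps + h) - equilibrium_occupancy(gamma, eps - h)) / (2 * h)
print("finite-difference check:", fd_gamma, fd_eps)
```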

  15. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.

  17. Bacteriophage-based assays for the rapid detection of rifampicin resistance in Mycobacterium tuberculosis: a meta-analysis.

    PubMed

    Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L

    2005-10-01

    To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates > or =95%, and 13 of 19 (68%) studies reported > or =95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.

  18. Comparative Diagnostic Performance of Ultrasonography and 99mTc-Sestamibi Scintigraphy for Parathyroid Adenoma in Primary Hyperparathyroidism: Systematic Review and Meta-Analysis

    PubMed

    Nafisi Moghadam, Reza; Amlelshahbaz, Amir Pasha; Namiranian, Nasim; Sobhan-Ardekani, Mohammad; Emami-Meybodi, Mahmood; Dehghan, Ali; Rahmanian, Masoud; Razavi-Ratki, Seid Kazem

    2017-12-28

    Objective: Ultrasonography (US) and parathyroid scintigraphy (PS) with 99mTc-MIBI are common methods for preoperative localization of parathyroid adenomas, but discrepancies exist regarding their diagnostic accuracy. The aim of this study was to compare PS and US for localization of parathyroid adenoma through a systematic review and meta-analysis of the literature. Methods: PubMed, Scopus (Embase), Web of Science and the reference lists of all included studies were searched up to 1st January 2016. The search strategy followed PICO characteristics. Heterogeneity between studies was assessed at the P < 0.1 level. Pooled point estimates of sensitivity, specificity and positive predictive value of SPECT and ultrasonography were calculated with 99% confidence intervals (CIs) from the available data. Data analysis was performed using Meta-DiSc software (version 1.4). Results: Of 188 retrieved studies, 75 duplicates were removed and a total of 113 titles and abstracts were reviewed. From these, 12 studies were selected. The meta-analysis determined a pooled sensitivity for scintigraphy of 83% [99% confidence interval (CI) 96.358-97.412] and for ultrasonography of 80% [99% confidence interval (CI) 76-83]. Similar results for specificity were also obtained for both approaches. Conclusion: According to this meta-analysis, there were no significant differences between the two methods in terms of sensitivity and specificity; the 99% confidence intervals overlapped, and the features of the two methods are similar. Creative Commons Attribution License.

  19. Financial analysis of technology acquisition using fractionated lasers as a model.

    PubMed

    Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R

    2010-08-01

    Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
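
    A simplified sketch of the ROI bookkeeping described above; all dollar figures and procedure counts are placeholders, and the lease is annualized naively (no payment deferral, discounting, or taxes) rather than following the authors' exact model.

```python
def annual_roi(price_per_procedure, procedures_per_year,
               annual_lease_payment, annual_service_contract,
               labor_cost_per_procedure, disposables_per_procedure):
    """Return (profit, ROI) for one year of operating a leased device."""
    revenue = price_per_procedure * procedures_per_year
    variable = (labor_cost_per_procedure + disposables_per_procedure) * procedures_per_year
    total_cost = annual_lease_payment + annual_service_contract + variable
    profit = revenue - total_cost
    return profit, profit / total_cost

# Hypothetical fractional-laser figures: a longer lease spreads the purchase
# price over more years, lowering the annual payment and raising ROI.
for years, lease in [(3, 60_000), (5, 38_000)]:
    profit, roi = annual_roi(900, 150, lease, 10_000, 60, 40)
    print(f"{years}-year lease: profit ${profit:,.0f}, ROI {roi:.1%}")
```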

  20. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  1. The effects of physical activity on impulsive choice: Influence of sensitivity to reinforcement amount and delay

    PubMed Central

    Strickland, Justin C.; Feinstein, Max A.; Lacy, Ryan T.; Smith, Mark A.

    2016-01-01

    Impulsive choice is a diagnostic feature and/or complicating factor for several psychological disorders and may be examined in the laboratory using delay-discounting procedures. Recent investigators have proposed using quantitative measures of analysis to examine the behavioral processes contributing to impulsive choice. The purpose of this study was to examine the effects of physical activity (i.e., wheel running) on impulsive choice in a single-response, discrete-trial procedure using two quantitative methods of analysis. To this end, rats were assigned to physical activity or sedentary groups and trained to respond in a delay-discounting procedure. In this procedure, one lever always produced one food pellet immediately, whereas a second lever produced three food pellets after a 0, 10, 20, 40, or 80-second delay. Estimates of sensitivity to reinforcement amount and sensitivity to reinforcement delay were determined using (1) a simple linear analysis and (2) an analysis of logarithmically transformed response ratios. Both analyses revealed that physical activity decreased sensitivity to reinforcement amount and sensitivity to reinforcement delay. These findings indicate that (1) physical activity has significant but functionally opposing effects on the behavioral processes that contribute to impulsive choice and (2) both quantitative methods of analysis are appropriate for use in single-response, discrete-trial procedures. PMID:26964905
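
    One way such sensitivities can be extracted, not necessarily the authors' exact parameterization, is to regress logarithmically transformed response ratios on delay: the intercept then reflects sensitivity to the 3:1 amount ratio at zero delay and the slope reflects sensitivity to delay. The choice counts below are invented for illustration.

```python
import numpy as np

# Delays (s) to the larger reinforcer and hypothetical counts of larger-later
# vs. smaller-sooner choices at each delay (illustrative data, not the study's).
delays = np.array([0, 10, 20, 40, 80], dtype=float)
larger = np.array([38, 30, 24, 15, 8], dtype=float)
smaller = np.array([2, 10, 16, 25, 32], dtype=float)

# Log response ratio; 0.5 added to each count to avoid log of zero.
log_ratio = np.log10((larger + 0.5) / (smaller + 0.5))

# Linear fit: intercept ~ sensitivity to the 3:1 amount ratio at zero delay,
# slope ~ sensitivity to delay (more negative = steeper discounting).
slope, intercept = np.polyfit(delays, log_ratio, 1)
print(f"amount sensitivity (intercept): {intercept:.3f}")
print(f"delay sensitivity (slope per s): {slope:.4f}")
```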

  2. Age estimation of burbot using pectoral fin rays, branchiostegal rays, and otoliths

    USGS Publications Warehouse

    Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2014-01-01

    Throughout much of its native distribution, burbot (Lota lota) is a species of conservation concern. Understanding dynamic rate functions is critical for the effective management of sensitive burbot populations, which necessitates accurate and precise age estimates, ideally obtained from a non-lethal structure. In an effort to identify a non-lethal ageing structure, we compared the precision of age estimates obtained from otoliths, pectoral fin rays, dorsal fin rays and branchiostegal rays from 208 burbot collected from the Green River drainage, Wyoming. Additionally, we compared the accuracy of age estimates from pectoral fin rays, dorsal fin rays and branchiostegal rays to those of otoliths. Dorsal fin rays were deemed a poor ageing structure early in the analysis and removed from further consideration. Exact agreement between readers and reader confidence were highest for otoliths and lowest for branchiostegal rays. Age-bias plots indicated that consensus age estimates obtained from branchiostegal rays and pectoral fin rays were appreciably different from those obtained from otoliths. Our results indicate that otoliths provide the most precise age estimates for burbot.

  3. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  4. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
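
    A stripped-down version of this kind of robustness check is sketched below: the Weibull shape is fixed at 1 (exponential failure times) so the conjugate inverted-gamma analysis has a closed form, the assigned prior is held fixed, and the "true" scale is drawn from generating distributions of varying shape. The prior hyperparameters, sample size, and generating distributions are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                      # failures per simulated life test
a0, b0 = 3.0, 4.0           # assigned inverted-gamma prior (shape, scale)
n_sim = 20_000

def run(draw_theta):
    """Monte Carlo MSE of the Bayes posterior mean vs. the unbiased sample mean."""
    se_bayes = se_mvue = 0.0
    for _ in range(n_sim):
        theta = draw_theta()                     # true mean life for this replication
        x = rng.exponential(theta, size=n)       # Weibull with shape fixed at 1
        bayes = (b0 + x.sum()) / (a0 + n - 1)    # posterior mean under the IG(a0, b0) prior
        mvue = x.mean()                          # minimum-variance unbiased estimator
        se_bayes += (bayes - theta) ** 2
        se_mvue += (mvue - theta) ** 2
    return se_bayes / n_sim, se_mvue / n_sim

# Generate the "true" parameter from distributions of varying shape (and a fixed value).
cases = {
    "inverted gamma (matched)": lambda: b0 / rng.gamma(a0),
    "lognormal":                lambda: rng.lognormal(mean=0.3, sigma=0.5),
    "uniform(0.5, 3)":          lambda: rng.uniform(0.5, 3.0),
    "fixed theta = 2":          lambda: 2.0,
}
for name, draw in cases.items():
    mse_b, mse_u = run(draw)
    print(f"{name:26s}  MSE Bayes {mse_b:.3f}   MSE MVUE {mse_u:.3f}")
```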

  5. Application of Diffusion Tensor Imaging Parameters to Detect Change in Longitudinal Studies in Cerebral Small Vessel Disease.

    PubMed

    Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen

    2016-01-01

    Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies and, therefore, sensitive surrogate markers are needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard. Sensitivity of the various parameters that can be derived from DTI is however unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We investigated 99 patients with symptomatic SVD, defined as clinical lacunar syndrome with MRI confirmation of a corresponding infarct as well as confluent white matter hyperintensities over a 3 year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over a three-year follow-up period we observed a decline in fractional anisotropy and increase in diffusivity in white matter tissue and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker for SVD progression as it had the smallest sample size estimate. This suggests disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI's potential as surrogate marker for SVD.
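
    The logic of ranking markers by sample size can be sketched with the standard two-arm comparison of mean annual change: the marker with the largest annualized change relative to its between-subject variability needs the fewest patients to detect a given slowing of decline. The annual changes, standard deviations, and 30% treatment effect below are placeholders, not the study's estimates.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(annual_change, sd_change, pct_slowing=0.3, alpha=0.05, power=0.8):
    """Patients per arm to detect a pct_slowing reduction in mean annual change."""
    delta = pct_slowing * annual_change               # treatment effect on the change
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd_change / delta) ** 2)

# Hypothetical DTI histogram markers: (mean annual change, SD of that change).
markers = {"MD peak height": (-0.020, 0.025),
           "mean FA":        (-0.006, 0.012),
           "median MD":      ( 0.008, 0.020)}
for name, (chg, sd) in markers.items():
    print(f"{name:14s} n/arm = {n_per_arm(abs(chg), sd)}")
```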

  6. Health economic comparison of SLIT allergen and SCIT allergoid immunotherapy in patients with seasonal grass-allergic rhinoconjunctivitis in Germany.

    PubMed

    Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias

    2015-01-01

    Allergoids are chemically modified allergen extracts administered to reduce allergenicity and to maintain immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy expressed as standardized mean differences was estimated using an indirect comparison on symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources as well as estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments of the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.

  7. Cost-effectiveness analysis of triple therapy with protease inhibitors in treatment-naive hepatitis C patients.

    PubMed

    Blázquez-Pérez, Antonio; San Miguel, Ramón; Mar, Javier

    2013-10-01

    Chronic hepatitis C is the leading cause of chronic liver disease, representing a significant burden in terms of morbidity, mortality and costs. A new scenario of therapy for hepatitis C virus (HCV) genotype 1 infection is being established with the approval of two effective HCV protease inhibitors (PIs) in combination with the standard of care (SOC), peginterferon and ribavirin. Our objective was to estimate the cost effectiveness of combination therapy with new PIs (boceprevir and telaprevir) plus peginterferon and ribavirin versus SOC in treatment-naive patients with HCV genotype 1 according to data obtained from clinical trials (CTs). A Markov model simulating chronic HCV progression was used to estimate disease treatment costs and effects over patients' lifetimes, in the Spanish national public healthcare system. The target population was treatment-naive patients with chronic HCV genotype 1, demographic characteristics for whom were obtained from the published pivotal CTs SPRINT and ADVANCE. Three options were analysed for each PI based on results from the two CTs: universal triple therapy, interleukin (IL)-28B-guided therapy and dual therapy with peginterferon and ribavirin. A univariate sensitivity analysis was performed to evaluate the uncertainty of certain parameters: age at start of treatment, transition probabilities, drug costs, CT efficacy results and a higher hazard ratio for all-cause mortality for patients with chronic HCV. Probabilistic sensitivity analyses were also carried out. Incremental cost-effectiveness ratios (ICERs) of €2012 per quality-adjusted life-year (QALY) gained were used as outcome measures. According to the base-case analysis, using dual therapy as the comparator, the alternative IL28B-guided therapy presents a more favorable ICER (€18,079/QALY for boceprevir and €25,914/QALY for telaprevir) than the universal triple therapy option (€27,594/QALY for boceprevir and €33,751/QALY for telaprevir), with an ICER clearly below the efficiency threshold for medical interventions in the Spanish setting. Sensitivity analysis showed that age at the beginning of treatment was an important factor that influenced the ICER. A potential reduction in PI costs would also clearly improve the ICER, and transition probabilities influenced the results, but to a lesser extent. Probabilistic sensitivity analyses showed that 95 % of the simulations presented an ICER below €40,000/QALY. Post hoc estimations of sustained virological responses of the IL28B-guided therapeutic option represented a limitation of the study. The therapeutic options analysed for the base-case cohort can be considered cost-effective interventions for the Spanish healthcare framework. Sensitivity analysis estimated an acceptability threshold of the IL28B-guided strategy of patients younger than 60 years.

  8. Diagnostic accuracy of spot urinary protein and albumin to creatinine ratios for detection of significant proteinuria or adverse pregnancy outcome in patients with suspected pre-eclampsia: systematic review and meta-analysis

    PubMed Central

    Morris, R K; Riley, R D; Doug, M; Deeks, J J

    2012-01-01

    Objective To determine the diagnostic accuracy of two “spot urine” tests for significant proteinuria or adverse pregnancy outcome in pregnant women with suspected pre-eclampsia. Design Systematic review and meta-analysis. Data sources Searches of electronic databases 1980 to January 2011, reference list checking, hand searching of journals, and contact with experts. Inclusion criteria Diagnostic studies, in pregnant women with hypertension, that compared the urinary spot protein to creatinine ratio or albumin to creatinine ratio with urinary protein excretion over 24 hours or adverse pregnancy outcome. Study characteristics, design, and methodological and reporting quality were objectively assessed. Data extraction Study results relating to diagnostic accuracy were extracted and synthesised using multivariate random effects meta-analysis methods. Results Twenty studies, testing 2978 women (pregnancies), were included. Thirteen studies examining protein to creatinine ratio for the detection of significant proteinuria were included in the multivariate analysis. Threshold values for protein to creatinine ratio ranged between 0.13 and 0.5, with estimates of sensitivity ranging from 0.65 to 0.89 and estimates of specificity from 0.63 to 0.87; the area under the summary receiver operating characteristics curve was 0.69. On average, across all studies, the optimum threshold (that optimises sensitivity and specificity combined) seems to be between 0.30 and 0.35 inclusive. However, no threshold gave a summary estimate above 80% for both sensitivity and specificity, and considerable heterogeneity existed in diagnostic accuracy across studies at most thresholds. No studies looked at protein to creatinine ratio and adverse pregnancy outcome. For albumin to creatinine ratio, meta-analysis was not possible. Results from a single study suggested that the most predictive result, for significant proteinuria, was with the DCA 2000 quantitative analyser (>2 mg/mmol) with a summary sensitivity of 0.94 (95% confidence interval 0.86 to 0.98) and a specificity of 0.94 (0.87 to 0.98). In a single study of adverse pregnancy outcome, results for perinatal death were a sensitivity of 0.82 (0.48 to 0.98) and a specificity of 0.59 (0.51 to 0.67). Conclusion The maternal “spot urine” estimate of protein to creatinine ratio shows promising diagnostic value for significant proteinuria in suspected pre-eclampsia. The existing evidence is not, however, sufficient to determine how protein to creatinine ratio should be used in clinical practice, owing to the heterogeneity in test accuracy and prevalence across studies. Insufficient evidence is available on the use of albumin to creatinine ratio in this area. Insufficient evidence exists for either test to predict adverse pregnancy outcome. PMID:22777026

  9. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
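
    A toy sketch of the shape of Monte Carlo hosting-capacity estimation: random PV deployments at increasing penetration, a deliberately crude linear voltage-rise proxy in place of a real power-flow solve, and a violation-probability criterion. Everything here (node count, sensitivity coefficients, limits, trial counts) is invented to show the procedure, not the paper's feeder models or optimization.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes = 50
# Crude stand-in for a power-flow model: per-node voltage-rise sensitivity
# (p.u. per MW injected), larger toward the feeder end.
v_sens = np.linspace(0.002, 0.012, n_nodes)
v_limit = 1.05               # upper voltage limit in p.u.
n_trials = 2000

def violation_probability(total_pv_mw):
    """Fraction of random PV deployments that push any node above the limit."""
    violations = 0
    for _ in range(n_trials):
        # Split the total PV over a random subset of nodes with random shares.
        sites = rng.choice(n_nodes, size=rng.integers(3, 15), replace=False)
        shares = rng.dirichlet(np.ones(len(sites))) * total_pv_mw
        v = np.ones(n_nodes)
        v[sites] += v_sens[sites] * shares
        violations += np.any(v > v_limit)
    return violations / n_trials

# Hosting capacity: the largest penetration whose violation probability stays acceptably small.
for mw in [2, 4, 6, 8, 10, 12]:
    print(f"{mw:>3d} MW installed -> P(violation) = {violation_probability(mw):.2f}")
```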

  10. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  11. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    PubMed

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Economics in "Global Health 2035": a sensitivity analysis of the value of a life year estimates.

    PubMed

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-06-01

    In "Global health 2035: a world converging within a generation," The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  13. Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models

    PubMed Central

    Trame, MN; Lesko, LJ

    2015-01-01

    A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
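
    A compact, pure-NumPy sketch of variance-based (Sobol) global sensitivity analysis using Saltelli-style sample matrices on the Ishigami benchmark function; a systems pharmacology application would replace `model()` with a simulation of the kinetic model and map the columns to its parameters. The estimator forms are the standard Saltelli (first-order) and Jansen (total-effect) formulas.

```python
import numpy as np

def model(x):
    # Ishigami test function, a common global-sensitivity benchmark.
    a, b = 7.0, 0.1
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
d, n = 3, 100_000
lo, hi = -np.pi, np.pi

# Two independent sample matrices A and B over the input hypercube.
A = rng.uniform(lo, hi, size=(n, d))
B = rng.uniform(lo, hi, size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # A with column i taken from B
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var            # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var      # total-effect index (Jansen 1999)
    print(f"x{i + 1}: S1 = {S1:.3f}   ST = {ST:.3f}")
```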

  14. Management of Malignant Pleural Effusion: A Cost-Utility Analysis.

    PubMed

    Shafiq, Majid; Frick, Kevin D; Lee, Hans; Yarmus, Lonny; Feller-Kopman, David J

    2015-07-01

    Malignant pleural effusion (MPE) is associated with a significant impact on health-related quality of life. Palliative interventions abound, with varying costs and degrees of invasiveness. We examined the relative cost-utility of 5 therapeutic alternatives for MPE among adults. Original studies investigating the management of MPE were extensively researched, and the most robust and current data particularly those from the TIME2 trial were chosen to estimate event probabilities. Medicare data were used for cost estimation. Utility estimates were adapted from 2 original studies and kept consistent with prior estimations. The decision tree model was based on clinical guidelines and authors' consensus opinion. Primary outcome of interest was the incremental cost-effectiveness ratio for each intervention over a less effective alternative over an analytical horizon of 6 months. Given the paucity of data on rapid pleurodesis protocol, a sensitivity analysis was conducted to address the uncertainty surrounding its efficacy in terms of achieving long-term pleurodesis. Except for repeated thoracentesis (RT; least effective), all interventions had similar effectiveness. Tunneled pleural catheter was the most cost-effective option with an incremental cost-effectiveness ratio of $45,747 per QALY gained over RT, assuming a willingness-to-pay threshold of $100,000/QALY. Multivariate sensitivity analysis showed that rapid pleurodesis protocol remained cost-ineffective even with an estimated probability of lasting pleurodesis up to 85%. Tunneled pleural catheter is the most cost-effective therapeutic alternative to RT. This, together with its relative convenience (requiring neither hospitalization nor thoracoscopic procedural skills), makes it an intervention of choice for MPE.

  15. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which can be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and on process inputs and outputs, are used to generate these innovations. Thresholds used for failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. Incorporating frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of the two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared with previous techniques.
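
    A minimal sketch of the innovations-based detection idea: an estimator predicts the sensor reading, and a failure is declared when the innovation (measurement minus prediction) exceeds a threshold set from the noise level plus an allowance for modeling error. The first-order exponential estimator, noise levels, and bias-failure scenario are invented for illustration; the paper's threshold selector and frequency-shaped estimator are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps = 300
true_state = np.cumsum(rng.normal(0, 0.05, n_steps)) + 10.0   # slowly drifting quantity
noise_std = 0.2
meas = true_state + rng.normal(0, noise_std, n_steps)
meas[200:] += 1.5             # injected sensor bias failure at step 200

model_error_bound = 0.3       # allowance for estimator/model mismatch
threshold = 3 * noise_std + model_error_bound

# Simple exponential estimator standing in for the model-based filter.
est = meas[0]
alpha = 0.1
for k in range(1, n_steps):
    innovation = meas[k] - est               # residual between sensor and prediction
    if abs(innovation) > threshold:
        print(f"failure flagged at step {k}, innovation = {innovation:.2f}")
        break
    est = est + alpha * innovation           # update the estimate with the innovation
```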

  16. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
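
    A sketch of the source-apportionment step described above: principal component analysis of standardized daily species concentrations yields loadings interpreted as source profiles, and the daily factor scores would then enter the time-series mortality regression in place of single pollutants. The species list, latent sources, and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
species = ["EC", "OC", "NO2", "Si", "Fe", "Ni", "V", "Se", "SO4"]
n_days = 365

# Synthetic daily concentrations driven by three latent "sources"
# (traffic-like, crustal-like, oil/coal-like) plus noise.
sources = rng.lognormal(0, 0.4, size=(n_days, 3))
profiles = np.array([
    [0.9, 0.8, 0.9, 0.1, 0.3, 0.0, 0.0, 0.1, 0.2],   # traffic-like
    [0.1, 0.2, 0.1, 0.9, 0.8, 0.0, 0.0, 0.0, 0.1],   # crustal-like
    [0.1, 0.1, 0.1, 0.0, 0.1, 0.8, 0.8, 0.7, 0.9],   # oil/coal-like
])
X = sources @ profiles + rng.normal(0, 0.2, size=(n_days, len(species)))

# PCA via SVD of the standardized species matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = Z @ Vt.T[:, :3]                 # daily factor scores for the regression step

print("variance explained by first 3 components:", np.round(explained[:3], 2))
for k in range(3):
    top = np.argsort(-np.abs(Vt[k]))[:4]
    print(f"component {k + 1} loads on:", [species[j] for j in top])
```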

  17. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants

    PubMed Central

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants. PMID:27304876

  18. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    PubMed

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants.

  19. Costs of cervical cancer screening and treatment using visual inspection with acetic acid (VIA) and cryotherapy in Ghana: the importance of scale

    PubMed Central

    Quentin, Wilm; Adu-Sarkodie, Yaw; Terris-Prestholt, Fern; Legood, Rosa; Opoku, Baafuor K; Mayaud, Philippe

    2011-01-01

    Objectives To estimate the incremental costs of visual inspection with acetic acid (VIA) and cryotherapy at cervical cancer screening facilities in Ghana; to explore determinants of costs through modelling; and to estimate national scale-up and annual programme costs. Methods Resource-use data were collected at four out of six active VIA screening centres, and unit costs were ascertained to estimate the costs per woman of VIA and cryotherapy. Modelling and sensitivity analysis were used to explore the influence of observed differences between screening facilities on estimated costs and to calculate national costs. Results Incremental economic costs per woman screened with VIA ranged from 4.93 US$ to 14.75 US$, and costs of cryotherapy were between 47.26 US$ and 84.48 US$ at surveyed facilities. Under base case assumptions, our model estimated the costs of VIA to be 6.12 US$ per woman and those of cryotherapy to be 27.96 US$. Sensitivity analysis showed that the number of women screened per provider and treated per facility was the most important determinants of costs. National annual programme costs were estimated to be between 0.6 and 4.0 million US$ depending on assumed coverage and adopted screening strategy. Conclusion When choosing between different cervical cancer prevention strategies, the feasibility of increasing uptake to achieve economies of scale should be a major concern. PMID:21214692

  20. Costs of cervical cancer screening and treatment using visual inspection with acetic acid (VIA) and cryotherapy in Ghana: the importance of scale.

    PubMed

    Quentin, Wilm; Adu-Sarkodie, Yaw; Terris-Prestholt, Fern; Legood, Rosa; Opoku, Baafuor K; Mayaud, Philippe

    2011-03-01

    To estimate the incremental costs of visual inspection with acetic acid (VIA) and cryotherapy at cervical cancer screening facilities in Ghana; to explore determinants of costs through modelling; and to estimate national scale-up and annual programme costs. Resource-use data were collected at four out of six active VIA screening centres, and unit costs were ascertained to estimate the costs per woman of VIA and cryotherapy. Modelling and sensitivity analysis were used to explore the influence of observed differences between screening facilities on estimated costs and to calculate national costs. Incremental economic costs per woman screened with VIA ranged from 4.93 US$ to 14.75 US$, and costs of cryotherapy were between 47.26 US$ and 84.48 US$ at surveyed facilities. Under base case assumptions, our model estimated the costs of VIA to be 6.12 US$ per woman and those of cryotherapy to be 27.96 US$. Sensitivity analysis showed that the number of women screened per provider and treated per facility was the most important determinants of costs. National annual programme costs were estimated to be between 0.6 and 4.0 million US$ depending on assumed coverage and adopted screening strategy.   When choosing between different cervical cancer prevention strategies, the feasibility of increasing uptake to achieve economies of scale should be a major concern. © 2011 Blackwell Publishing Ltd.
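
    The scale effect highlighted in the conclusion can be made concrete with a simple cost function: annualized fixed costs (training, equipment, supervision) are spread over the number of women screened, while consumables scale per woman. The cost components below are placeholders, not the Ghanaian unit costs.

```python
def cost_per_woman(women_per_year, annual_fixed_cost, variable_cost):
    """Average cost per woman screened at a facility for one year."""
    return annual_fixed_cost / women_per_year + variable_cost

# Hypothetical VIA screening facility: US$1,500/year of fixed costs
# (staff time, equipment, training amortization) and US$1.50 in consumables per woman.
for n in [100, 250, 500, 1000, 2000]:
    print(f"{n:>5d} women/year -> US${cost_per_woman(n, 1500, 1.5):.2f} per woman")
```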

  1. Mendelian randomization with fine-mapped genetic data: Choosing from large numbers of correlated instrumental variables.

    PubMed

    Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C

    2017-12-01

    Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
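
    A heavily simplified sketch of the general idea of forming instruments from principal components of summarized data: the outcome-association covariance implied by the LD correlation matrix is eigen-decomposed, variant-level associations are projected onto the leading components, and an inverse-variance-weighted estimate is computed on the transformed data. This is a generic construction for illustration only; the published method uses a specific weighted correlation matrix and component-retention rule that are not reproduced here, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
p = 100                      # correlated variants in one gene region
theta = 0.4                  # true causal effect of the exposure on the outcome

# Simulated summarized data: exposure associations, LD correlations,
# and outcome associations consistent with the causal effect.
rho = np.array([[0.9 ** abs(i - j) for j in range(p)] for i in range(p)])
bx = rng.normal(0.1, 0.03, p)
se_y = np.full(p, 0.02)
cov_y = np.outer(se_y, se_y) * rho
by = theta * bx + rng.multivariate_normal(np.zeros(p), cov_y)

# Project onto leading eigenvectors of the outcome-association covariance,
# keeping enough components to explain ~99% of its variance.
vals, vecs = np.linalg.eigh(cov_y)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99)) + 1

bx_t = vecs[:, :k].T @ bx
by_t = vecs[:, :k].T @ by
var_t = vals[:k]             # transformed associations have (near-)diagonal covariance

# Inverse-variance-weighted estimate on the near-independent components.
est = np.sum(bx_t * by_t / var_t) / np.sum(bx_t ** 2 / var_t)
se = np.sqrt(1.0 / np.sum(bx_t ** 2 / var_t))
print(f"IVW estimate on {k} components: {est:.3f} (SE {se:.3f}); true effect {theta}")
```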

  2. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  3. Simplified methods for evaluating road prism stability

    Treesearch

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...

  4. Estimation of the Young's modulus of the human pars tensa using in-situ pressurization and inverse finite-element analysis.

    PubMed

    Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M

    2017-03-01

    Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    PubMed

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.

  6. Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.

    PubMed

    Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen

    2018-01-19

    Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is based on micro-electro-mechanical systems (MEMS) technology, and thus has the attractive features of small size, high sensitivity and low cost, and is suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with HHT. Theoretical discussion of the resolution issue is also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.

  7. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  8. Design, characterization, and sensitivity of the supernova trigger system at Daya Bay

    NASA Astrophysics Data System (ADS)

    Wei, Hanyu; Lebanowski, Logan; Li, Fei; Wang, Zhe; Chen, Shaomin

    2016-02-01

    Providing an early warning of galactic supernova explosions from neutrino signals is important in studying supernova dynamics and neutrino physics. A dedicated supernova trigger system has been designed and installed in the data acquisition system at Daya Bay and integrated into the worldwide Supernova Early Warning System (SNEWS). Daya Bay's unique feature of eight identically-designed detectors deployed in three separate experimental halls makes the trigger system naturally robust against cosmogenic backgrounds, enabling a prompt analysis of online triggers and a tight control of the false-alert rate. The trigger system is estimated to be fully sensitive to 1987A-type supernova bursts throughout most of the Milky Way. The significant gain in sensitivity of the eight-detector configuration over a mass-equivalent single detector is also estimated. The experience of this online trigger system is applicable to future projects with spatially distributed detectors.

  9. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  10. Analysis of the sensitivity of soils to the leaching of agricultural pesticides in Ohio

    USGS Publications Warehouse

    Schalk, C.W.

    1998-01-01

    Pesticides have not been found frequently in the ground waters of Ohio even though large amounts of agricultural pesticides are applied to fields in Ohio every year. State regulators, including representatives from the Ohio Environmental Protection Agency and the Departments of Agriculture, Health, and Natural Resources, are striving to keep the presence of pesticides in ground water to a minimum. A proposed pesticide management plan for the State aims at protecting Ohio's ground water by assessing pesticide-leaching potential using geographic information system (GIS) technology and invoking a monitoring plan that targets aquifers deemed most likely to be vulnerable to pesticide leaching. The U.S. Geological Survey, in cooperation with the Ohio Department of Agriculture, assessed the sensitivity of mapped soil units in Ohio to pesticide leaching. A soils data base (STATSGO) compiled by the U.S. Department of Agriculture was used iteratively to rate soil units as being of high to low sensitivity on the basis of soil permeability, clay content, and organic-matter content. Although this analysis did not target aquifers directly, the results can be used as a first estimate of areas most likely to be subject to pesticide contamination from normal agricultural practices. High-sensitivity soil units were found in lakefront areas and former lakefront beach ridges, buried valleys in several river basins, and parts of central and south-central Ohio. Medium-high-sensitivity soil units were found in other river basins, along Lake Erie in north-central Ohio, and in many of the upland areas of the Muskingum River Basin. Low-sensitivity map units dominated the northwestern quadrant of Ohio.

  11. Non-ignorable missingness in logistic regression.

    PubMed

    Wang, Joanna J J; Bartlett, Mark; Ryan, Louise

    2017-08-30

    Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show that the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.
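
    The selection-model factorisation referred to above can be written compactly. The notation below is a generic sketch (r is the response indicator and the sensitivity parameter δ links nonresponse to the possibly unobserved outcome, with δ = 0 recovering a missing-at-random mechanism); it is not the authors' exact parameterisation.

```latex
f(y, r \mid x;\, \beta, \alpha, \delta)
  = \underbrace{f(y \mid x;\, \beta)}_{\text{logistic outcome model}}
    \times
    \underbrace{f(r \mid y, x;\, \alpha, \delta)}_{\text{missingness model}},
\qquad
\operatorname{logit}\,\Pr(r = 1 \mid y, x)
  = \alpha_{0} + \alpha_{1}^{\top} x + \delta\, y .
```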

  12. Incidence of tuberculous meningitis in France, 2000: a capture-recapture analysis.

    PubMed

    Cailhol, J; Che, D; Jarlier, V; Decludt, B; Robert, J

    2005-07-01

    To estimate the incidence of culture-positive and culture-negative tuberculous meningitis (TBM) in France in 2000. Capture-recapture method using two unrelated sources of data: the tuberculosis (TB) mandatory notification system (MNTB), recording patients treated by anti-tuberculosis drugs, and a survey by the National Reference Centre (NRC) for mycobacterial drug resistance, recording culture-positive TBM. Of 112 cases of TBM reported to the MNTB, 28 culture-positive and 34 culture-negative meningitis cases were validated (17 duplicates, 3 cases from outside France, 21 false notifications, and 9 lost records were excluded). The NRC recorded 31 culture-positive cases, including 21 known by the MNTB. When the capture-recapture method was applied to the reported culture-positive meningitis cases, the estimated number of meningitis cases was 41 and the incidence was 0.7 cases per million. Sensitivity was 75.6% for the NRC, 68.3% for the MNTB, and 92.7% for both systems together. When sensitivity of the MNTB for culture-positive cases was applied to culture-negative meningitis, the total estimated number of culture-negative meningitis cases was 50 and the incidence was 0.85 cases per million. TBM is underestimated in France. Capture-recapture analysis using different sources to better estimate its incidence is of great interest.
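
    The totals quoted above can be checked with the classical two-source capture-recapture estimators; the snippet below uses only the counts reported in the abstract (28 MNTB culture-positive cases, 31 NRC cases, 21 found in both) and reproduces the estimate of 41 cases and the quoted source sensitivities.

```python
# Two-source capture-recapture check of the abstract's figures (culture-positive TBM,
# France, 2000). n1 = cases in the mandatory notification system (MNTB), n2 = cases at
# the national reference centre (NRC), m = cases found in both sources.
n1, n2, m = 28, 31, 21

# Lincoln-Petersen estimator of the total number of cases (Chapman's correction shown too).
n_lp = n1 * n2 / m
n_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(round(n_lp), round(n_chapman))                      # both ~41, as reported

# Source sensitivities relative to the capture-recapture total.
n_hat = round(n_lp)
print(f"MNTB sensitivity:  {n1 / n_hat:.1%}")             # ~68.3%
print(f"NRC sensitivity:   {n2 / n_hat:.1%}")             # ~75.6%
print(f"Either source:     {(n1 + n2 - m) / n_hat:.1%}")  # ~92.7%
```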

  13. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models mean that calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If convergence is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstrap samples. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model runs. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate the method independence of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable, transferable published sensitivity results.

  14. Thermal analysis of microlens formation on a sensitized gelatin layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko

    2009-07-01

    We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.

  15. Sex estimation from sternal measurements using multidetector computed tomography.

    PubMed

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-12-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification. Skeletal surveys are the main methods for sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years; range: 30-60 years) were included in the study. Manubrium length (ML), mesosternum length (MSL), sternebra 1 width (S1W), and sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between the sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values were significantly higher than those of females (P < 0.001), while the SI was significantly lower in males (P < 0.001). In discriminant analysis, MSL had a high accuracy rate, with 80.2% in females and 80.9% in males. MSL also had the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in the three stepwise discriminant analyses for both sexes. Stepwise model 1 (ML, MSL, S1W, S3W) had the highest accuracy rate in the stepwise discriminant analysis, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum might provide important information for sex estimation.

  16. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
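
    The entry above screens a 14-factor filtration model with a revised Morris method. The model itself is not available here, so the sketch below applies standard Morris elementary-effects screening (via the SALib package) to a stand-in function with invented factor names and bounds; only the workflow, not the numbers, is meant to carry over.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

# Problem definition: factor names and bounds are purely illustrative stand-ins for
# the AnMBR filtration model's 14 factors.
problem = {
    "num_vars": 4,
    "names": ["cake_compressibility", "specific_resistance", "back_transport", "fouling_rate"],
    "bounds": [[0.1, 1.0], [1e11, 1e13], [0.0, 0.5], [0.01, 0.2]],
}

def stand_in_model(x):
    # Toy response standing in for a simulated membrane performance indicator.
    return x[:, 0] * np.log10(x[:, 1]) + 5.0 * x[:, 3] + 0.1 * x[:, 2]

X = morris_sample(problem, N=100, num_levels=4)       # Morris trajectories
Y = stand_in_model(X)
Si = morris.analyze(problem, X, Y, num_levels=4, print_to_console=False)

# mu* ranks overall influence; sigma flags non-linearity / interactions.
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:22s}  mu* = {mu_star:8.3f}  sigma = {sigma:8.3f}")
```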

  17. A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.

    PubMed

    Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer

    2016-09-10

    When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  18. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.

  19. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present as the aircraft comes to rest during the ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of the unknown parameters is also presented in this research effort. With all needed parameters available, a comparison and validation of simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Reconstructing the instantaneous coefficient of friction from force and moment equilibrium about the CG at landing appears to be reasonably accurate when compared to the simulated friction coefficient. This remains true when white noise is added to the FDR and estimated parameters and when crosswind is introduced to the simulation. The linear analysis shows that the minimum sampling frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within +/-5% to keep the change in the average coefficient of friction below 1%. The non-linear analysis shows that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
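
    A minimal numerical sketch of the force-balance idea described above: if the FDR-style quantities on the right-hand side were known (they are invented here), the instantaneous friction coefficient follows from longitudinal and vertical equilibrium during the ground roll. The full thesis model also includes moment equilibrium, gear load distribution and an antiskid model, none of which are reproduced.

```python
# All numbers below are illustrative; sign convention assumes the aircraft is decelerating.
def friction_coefficient(mass, decel, drag, residual_thrust, lift, g=9.81):
    """mass [kg], decel [m/s^2, positive while slowing], forces [N]."""
    braking_force = mass * decel - drag + residual_thrust  # longitudinal force balance
    normal_force = mass * g - lift                         # vertical force balance
    return braking_force / normal_force

# e.g. a 60-t aircraft decelerating at 2.5 m/s^2 shortly after touchdown (assumed values)
mu = friction_coefficient(mass=60_000, decel=2.5, drag=40_000,
                          residual_thrust=5_000, lift=150_000)
print(f"instantaneous friction coefficient ~ {mu:.2f}")
```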

  20. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was developed in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower atmosphere dose rates. The substantial sensitivity of these performance indicators is encouraging for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.

  1. A learning framework for age rank estimation based on face images with scattering transform.

    PubMed

    Chang, Kuang-Yu; Chen, Chu-Song

    2015-03-01

    This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, scattering transform, which scatters the Gabor coefficients and pooled with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
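
    The rank-aggregation idea described above can be sketched as a set of per-threshold binary classifiers whose positive votes are summed into an age rank. The code below is a generic illustration: plain logistic regression on synthetic features replaces the paper's scattering-transform descriptor, and the simple distance-based sample weighting only gestures at its cost-sensitive design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal(X, ages, thresholds):
    """One binary classifier per threshold k, answering 'is the age greater than k?'."""
    models = []
    for k in thresholds:
        y_bin = (ages > k).astype(int)
        weights = 1.0 + np.abs(ages - k)   # errors far from the threshold cost more (assumed weighting)
        models.append(LogisticRegression(max_iter=1000).fit(X, y_bin, sample_weight=weights))
    return models

def predict_age(models, X, min_age):
    votes = np.sum([m.predict(X) for m in models], axis=0)
    return min_age + votes                 # rank = lowest age + number of "older" votes

rng = np.random.default_rng(0)
ages = rng.integers(20, 66, size=500)
X = rng.normal(size=(500, 16))
X[:, 0] += 0.08 * ages                     # weak age signal in one toy feature
thresholds = np.arange(20, 65)
models = fit_ordinal(X, ages, thresholds)
print(predict_age(models, X[:5], min_age=20), ages[:5])
```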

  2. Consumer Choice of E85 Denatured Ethanol Fuel Blend: Price Sensitivity and Cost of Limited Fuel Availability

    DOE PAGES

    Liu, Changzheng; Greene, David

    2014-12-01

    The promotion of greater use of E85, a fuel blend of 85% denatured ethanol, by flex-fuel vehicle owners is an important means of complying with the Renewable Fuel Standard 2. A good understanding of factors affecting E85 demand is necessary for effective policies that promote E85 and for developing models that forecast E85 sales in the United States. In this paper, the sensitivity of aggregate E85 demand to E85 and gasoline prices is estimated, as is the relative availability of E85 versus gasoline. The econometric analysis uses recent data from Minnesota, North Dakota, and Iowa. The more recent data allow a better estimate of nonfleet demand and indicate that the market price elasticity of E85 choice is substantially higher than previously estimated.

  3. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.
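
    For reference, the single-season occupancy likelihood that underlies the closure assumption discussed above is shown below in its standard form (ψ = probability a site is occupied, p = per-survey detection probability, K surveys, h_k the detection record for survey k); the authors' specific models may add covariates or relax closure.

```latex
\Pr(h_1,\dots,h_K)
  = \psi \prod_{k=1}^{K} p^{\,h_k} (1-p)^{\,1-h_k}
  \quad\text{if } \textstyle\sum_{k} h_k > 0,
\qquad
\Pr(h_1=\dots=h_K=0) = \psi\,(1-p)^{K} + (1-\psi).
```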

  4. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in microstructure. However, very little attention has been given in the past to the experimental measurements of the descriptors of microstructural distance distributions due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate the micro-structural distance distributions. The application of the technique is demonstrated via estimation of K function, radial distribution function, and nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.

  5. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical sub-urban site in Singapore. Overall, the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller modelled values during daytime, and the model also significantly underpredicts both the daytime peak and the nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.

  6. Commercial Serological Tests for the Diagnosis of Active Pulmonary and Extrapulmonary Tuberculosis: An Updated Systematic Review and Meta-Analysis

    PubMed Central

    Steingart, Karen R.; Flores, Laura L.; Dendukuri, Nandini; Schiller, Ian; Laal, Suman; Ramsay, Andrew; Hopewell, Philip C.; Pai, Madhukar

    2011-01-01

    Background Serological (antibody detection) tests for tuberculosis (TB) are widely used in developing countries. As part of a World Health Organization policy process, we performed an updated systematic review to assess the diagnostic accuracy of commercial serological tests for pulmonary and extrapulmonary TB with a focus on the relevance of these tests in low- and middle-income countries. Methods and Findings We used methods recommended by the Cochrane Collaboration and GRADE approach for rating quality of evidence. In a previous review, we searched multiple databases for papers published from 1 January 1990 to 30 May 2006, and in this update, we add additional papers published from that period until 29 June 2010. We prespecified subgroups to address heterogeneity and summarized test performance using bivariate random effects meta-analysis. For pulmonary TB, we included 67 studies (48% from low- and middle-income countries) with 5,147 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (31% to 100%). For anda-TB IgG, the only test with enough studies for meta-analysis, pooled sensitivity was 76% (95% CI 63%–87%) in smear-positive (seven studies) and 59% (95% CI 10%–96%) in smear-negative (four studies) patients; pooled specificities were 92% (95% CI 74%–98%) and 91% (95% CI 79%–96%), respectively. Compared with ELISA (pooled sensitivity 60% [95% CI 6%–65%]; pooled specificity 98% [95% CI 96%–99%]), immunochromatographic tests yielded lower pooled sensitivity (53%, 95% CI 42%–64%) and comparable pooled specificity (98%, 95% CI 94%–99%). For extrapulmonary TB, we included 25 studies (40% from low- and middle-income countries) with 1,809 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (59% to 100%). Overall, quality of evidence was graded very low for studies of pulmonary and extrapulmonary TB. Conclusions Despite expansion of the literature since 2006, commercial serological tests continue to produce inconsistent and imprecise estimates of sensitivity and specificity. Quality of evidence remains very low. These data informed a recently published World Health Organization policy statement against serological tests. Please see later in the article for the Editors' Summary PMID:21857806

  7. Sensitivity of the reference evapotranspiration to key climatic variables during the growing season in the Ejina oasis northwest China.

    PubMed

    Hou, Lan-Gong; Zou, Song-Bing; Xiao, Hong-Lang; Yang, Yong-Gang

    2013-01-01

    The standardized FAO56 Penman-Monteith model, which has been the most reasonable method in both humid and arid climatic conditions, provides reference evapotranspiration (ETo) estimates for planning and efficient use of agricultural water resources. And sensitivity analysis is important in understanding the relative importance of climatic variables to the variation of reference evapotranspiration. In this study, a non-dimensional relative sensitivity coefficient was employed to predict responses of ETo to perturbations of four climatic variables in the Ejina oasis northwest China. A 20-year historical dataset of daily air temperature, wind speed, relative humidity and daily sunshine duration in the Ejina oasis was used in the analysis. Results have shown that daily sensitivity coefficients exhibited large fluctuations during the growing season, and shortwave radiation was the most sensitive variable in general for the Ejina oasis, followed by air temperature, wind speed and relative humidity. According to this study, the response of ETo can be preferably predicted under perturbation of air temperature, wind speed, relative humidity and shortwave radiation by their sensitivity coefficients.
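
    The non-dimensional relative sensitivity coefficient used in studies of this kind is conventionally defined as the relative change in ETo per relative change in a climatic variable V; the expression below is that standard definition, assumed (not verified) to match the coefficient used in this paper.

```latex
S_{V} \;=\; \lim_{\Delta V \to 0}
  \left( \frac{\Delta ET_{o} / ET_{o}}{\Delta V / V} \right)
  \;=\; \frac{\partial ET_{o}}{\partial V}\cdot\frac{V}{ET_{o}} .
```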

  8. NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    2009-02-28

    The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.

  9. The cross-cut statistic and its sensitivity to bias in observational studies with ordered doses of treatment.

    PubMed

    Rosenbaum, Paul R

    2016-03-01

    A common practice with ordered doses of treatment and ordered responses, perhaps recorded in a contingency table with ordered rows and columns, is to cut or remove a cross from the table, leaving the outer corners--that is, the high-versus-low dose, high-versus-low response corners--and from these corners to compute a risk or odds ratio. This little remarked but common practice seems to be motivated by the oldest and most familiar method of sensitivity analysis in observational studies, proposed by Cornfield et al. (1959), which says that to explain a population risk ratio purely as bias from an unobserved binary covariate, the prevalence ratio of the covariate must exceed the risk ratio. Quite often, the largest risk ratio, hence the one least sensitive to bias by this standard, is derived from the corners of the ordered table with the central cross removed. Obviously, the corners use only a portion of the data, so a focus on the corners has consequences for the standard error as well as for bias, but sampling variability was not a consideration in this early and familiar form of sensitivity analysis, where point estimates replaced population parameters. Here, this cross-cut analysis is examined with the aid of design sensitivity and the power of a sensitivity analysis. © 2015, The International Biometric Society.

  10. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
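
    The reliability-physics idea above reduces to the probability that the operators' performance time exceeds the available phenomenological time. The Monte Carlo sketch below illustrates that computation with purely assumed lognormal distributions (the study derived the phenomenological-time distribution from simulation and goodness-of-fit tests, and the performance time from operator interviews).

```python
import numpy as np

# Reliability-physics sketch: HEP = P(performance time > phenomenological time).
# Both distributions below are illustrative assumptions, not the study's fitted ones.
rng = np.random.default_rng(42)
n = 1_000_000
t_phenomenological = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)  # minutes (assumed)
t_performance = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)       # minutes (assumed)

hep = np.mean(t_performance > t_phenomenological)
print(f"Estimated human error probability: {hep:.3e}")
```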

  11. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  12. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
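
    A compact sketch of the windowed computation described above: for each window size (scale), local coefficients of variation are compared against the overall coefficient of variation. The exact windowing scheme and distance used by the published MSCV may differ; the function and parameter names here are illustrative.

```python
import numpy as np

def mscv(x, window_sizes):
    """Multiscale coefficient of variation: for each scale, average the absolute
    difference between windowed CV estimates and the overall CV (simplified reading
    of the measure described in the abstract)."""
    x = np.asarray(x, dtype=float)
    overall_cv = x.std() / x.mean()
    out = {}
    for w in window_sizes:
        n = len(x) // w
        windows = x[: n * w].reshape(n, w)              # non-overlapping windows
        local_cv = windows.std(axis=1) / windows.mean(axis=1)
        out[w] = float(np.mean(np.abs(local_cv - overall_cv)))
    return out

# Toy usage on a short series of positive inter-event intervals.
rng = np.random.default_rng(1)
series = rng.gamma(shape=4.0, scale=0.25, size=300)
print(mscv(series, window_sizes=[5, 10, 20, 40]))
```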

  13. The cost-effectiveness of smoking cessation support delivered by mobile phone text messaging: Txt2stop.

    PubMed

    Guerriero, Carla; Cairns, John; Roberts, Ian; Rodgers, Anthony; Whittaker, Robyn; Free, Caroline

    2013-10-01

    The txt2stop trial has shown that mobile-phone-based smoking cessation support doubles biochemically validated quitting at 6 months. This study examines the cost-effectiveness of smoking cessation support delivered by mobile phone text messaging. The lifetime incremental costs and benefits of adding text-based support to current practice are estimated from a UK NHS perspective using a Markov model. The cost-effectiveness was measured in terms of cost per quitter, cost per life year gained and cost per QALY gained. As in previous studies, smokers are assumed to face a higher risk of experiencing the following five diseases: lung cancer, stroke, myocardial infarction, chronic obstructive pulmonary disease, and coronary heart disease (i.e. the main fatal or disabling, but by no means the only, adverse effects of prolonged smoking). The treatment costs and health state values associated with these diseases were identified from the literature. The analysis was based on the age and gender distribution observed in the txt2stop trial. Effectiveness and cost parameters were varied in deterministic sensitivity analyses, and a probabilistic sensitivity analysis was also performed. The cost of text-based support per 1,000 enrolled smokers is £16,120, which, given an estimated 58 additional quitters at 6 months, equates to £278 per quitter. However, when the future NHS costs saved (as a result of reduced smoking) are included, text-based support would be cost saving. It is estimated that 18 LYs are gained per 1,000 smokers (0.3 LYs per quitter) receiving text-based support, and 29 QALYs are gained (0.5 QALYs per quitter). The deterministic sensitivity analysis indicated that changes in individual model parameters did not alter the conclusion that this is a cost-effective intervention. Similarly, the probabilistic sensitivity analysis indicated a >90 % chance that the intervention will be cost saving. This study shows that under a wide variety of conditions, personalised smoking cessation advice and support by mobile phone message is both beneficial for health and cost saving to a health system.

  14. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    PubMed

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing simple sensitivity analysis methods to be used. The sensitivity analysis method was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
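
    The linearity referred to above is the standard system-identification property that both the joint moments and the ground reactions are linear in the stacked segment inertial parameters; schematically (notation assumed, not taken from the paper):

```latex
\boldsymbol{\tau} = \mathbf{Y}_{\tau}(\mathbf{q}, \dot{\mathbf{q}}, \ddot{\mathbf{q}})\,\boldsymbol{\phi},
\qquad
\begin{bmatrix} \mathbf{F}_{gr} \\ \mathbf{M}_{gr} \end{bmatrix}
  = \mathbf{Y}_{g}(\mathbf{q}, \dot{\mathbf{q}}, \ddot{\mathbf{q}})\,\boldsymbol{\phi}.
```

    Here φ stacks, for each segment, its mass, first moments of mass and inertia tensor components (10 standard parameters per segment, hence 150 for a 15-segment model), so the sensitivity of the joint moments or ground reactions to any single parameter can be read directly from the corresponding column of the regressor matrix.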

  15. Thermal affinity as the dominant factor changing Mediterranean fish abundances.

    PubMed

    Givan, Or; Edelist, Dor; Sonin, Oren; Belmaker, Jonathan

    2018-01-01

    Recent decades have seen profound changes in species abundance and community composition. In the marine environment, the major anthropogenic drivers of change comprise exploitation, invasion by nonindigenous species, and climate change. However, the magnitude of these stressors has been widely debated and we lack empirical estimates of their relative importance. In this study, we focused on Eastern Mediterranean, a region exposed to an invasion of species of Red Sea origin, extreme climate change, and high fishing pressure. We estimated changes in fish abundance using two fish trawl surveys spanning a 20-year period, and correlated these changes with estimated sensitivity of species to the different stressors. We estimated sensitivity to invasion using the trait similarity between indigenous and nonindigenous species; sensitivity to fishing using a published composite index based on the species' life-history; and sensitivity to climate change using species climatic affinity based on occurrence data. Using both a meta-analytical method and random forest analysis, we found that for shallow-water species the most important driver of population size changes is sensitivity to climate change. Species with an affinity to warm climates increased in relative abundance and species with an affinity to cold climates decreased suggesting a strong response to warming local sea temperatures over recent decades. This decrease in the abundance of cold-water-associated species at the trailing "warm" end of their distribution has been rarely documented. Despite the immense biomass of nonindigenous species and the presumed high fishing pressure, these two latter factors seem to have only a minor role in explaining abundance changes. The decline in abundance of indigenous species of cold-water origin indicates a future major restructuring of fish communities in the Mediterranean in response to the ongoing warming, with unknown impacts on ecosystem function. © 2017 John Wiley & Sons Ltd.

  16. Postmodeling Sensitivity Analysis to Detect the Effect of Missing Data Mechanisms

    ERIC Educational Resources Information Center

    Jamshidian, Mortaza; Mata, Matthew

    2008-01-01

    Incomplete or missing data is a common problem in almost all areas of empirical research. It is well known that simple and ad hoc methods such as complete case analysis or mean imputation can lead to biased and/or inefficient estimates. The method of maximum likelihood works well; however, when the missing data mechanism is not one of missing…

  17. Sensitivity analysis of reference evapotranspiration to sensor accuracy

    USDA-ARS?s Scientific Manuscript database

    Meteorological sensor networks are often used across agricultural regions to calculate the ASCE Standardized Reference ET Equation, and inaccuracies in individual sensors can lead to inaccuracies in ET estimates. Multiyear datasets from the semi-arid Colorado Agricultural Meteorological (CoAgMet) an...

  18. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  19. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  20. Mathematical modelling of non-stationary fluctuation analysis for studying channel properties of synaptic AMPA receptors

    PubMed Central

    Benke, Timothy A; Lüthi, Andreas; Palmer, Mary J; Wikström, Martin A; Anderson, William W; Isaac, John T R; Collingridge, Graham L

    2001-01-01

    The molecular properties of synaptic α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) receptors are an important factor determining excitatory synaptic transmission in the brain. Changes in the number (N) or single-channel conductance (γ) of functional AMPA receptors may underlie synaptic plasticity, such as long-term potentiation (LTP) and long-term depression (LTD). These parameters have been estimated using non-stationary fluctuation analysis (NSFA). The validity of NSFA for studying the channel properties of synaptic AMPA receptors was assessed using a cable model with dendritic spines and a microscopic kinetic description of AMPA receptors. Electrotonic, geometric and kinetic parameters were altered in order to determine their effects on estimates of the underlying γ. Estimates of γ were very sensitive to the access resistance of the recording (RA) and the mean open time of AMPA channels. Estimates of γ were less sensitive to the distance between the electrode and the synaptic site, the electrotonic properties of dendritic structures, recording electrode capacitance and background noise. Estimates of γ were insensitive to changes in spine morphology, synaptic glutamate concentration and the peak open probability (Po) of AMPA receptors. The results obtained using the model agree with biological data, obtained from 91 dendritic recordings from rat CA1 pyramidal cells. A correlation analysis showed that RA resulted in a slowing of the decay time constant of excitatory postsynaptic currents (EPSCs) by approximately 150 %, from an estimated value of 3.1 ms. RA also greatly attenuated the absolute estimate of γ by approximately 50-70 %. When other parameters remain constant, the model demonstrates that NSFA of dendritic recordings can readily discriminate between changes in γ vs. changes in N or Po. Neither background noise nor asynchronous activation of multiple synapses prevented reliable discrimination between changes in γ and changes in either N or Po. The model (available online) can be used to predict how changes in the different properties of AMPA receptors may influence synaptic transmission and plasticity. PMID:11731574
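    The variance-mean parabola at the heart of NSFA can be illustrated with a short fit on synthetic data; the single-channel current, channel number, holding potential, and reversal potential below are hypothetical, not values from the study.

    ```python
    import numpy as np

    # NSFA sketch on synthetic data (not the authors' model): fit
    #   var(I) = i*I - I**2/N + b
    # to the variance-vs-mean relation, then gamma = i / (Vhold - Erev).
    rng = np.random.default_rng(1)

    i_true, N_true, baseline = 1.2, 80.0, 4.0         # pA, channels, pA^2 (hypothetical)
    I_mean = np.linspace(5, 90, 40)                    # mean EPSC amplitude bins (pA)
    var = i_true * I_mean - I_mean**2 / N_true + baseline + rng.normal(0, 1.0, I_mean.size)

    # Quadratic fit: var = c2*I^2 + c1*I + c0  with  c2 = -1/N, c1 = i, c0 = baseline
    c2, c1, c0 = np.polyfit(I_mean, var, 2)
    i_hat, N_hat = c1, -1.0 / c2

    Vhold, Erev = -60.0, 0.0                           # mV (hypothetical)
    gamma_pS = i_hat / (Vhold - Erev) * 1e3            # pA/mV = nS, *1e3 -> pS
    print(f"i = {i_hat:.2f} pA, N = {N_hat:.0f}, gamma = {abs(gamma_pS):.1f} pS")
    ```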

  1. Coal resources in environmentally-sensitive lands under federal management

    USGS Publications Warehouse

    Watson, William D.; Tully, John K.; Moser, Edward N.; Dee, David P.; Bryant, Karen; Schall, Richard; Allan, Harold A.

    1995-01-01

    This report presents estimates of coal-bearing acreage and coal tonnage in environmentally-sensitive areas. The analysis was conducted to provide data for rulemaking by the Federal Office of Surface Mining (Watson and others, 1995). The rulemaking clarifies conditions under which coal can be mined in environmentally-sensitive areas. The area of the U.S. is about 2.3 billion acres. Contained within that acreage are certain environmentally-sensitive and unique areas (including parks, forests, and various other Federal land preserves). These areas are afforded special protection under Federal and State law. Altogether these protected areas occupy about 400 million acres. This report assesses coal acreage and coal tonnage in these protected Federal land preserves. Results are presented in the form of 8 map-displays prepared using GIS methods at a national scale. Tables and charts that accompany each map provide estimates of the total acreage in Federal land preserve units that overlap or fall within coal fields, coal-bearing acreage in each unit, and coal tonnage in each unit. Summary charts, compiled from the maps, indicate that about 8% of the Nation's coal reserves are located within environmentally-sensitive Federal land preserves.

  2. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    NASA Astrophysics Data System (ADS)

    Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.

    2018-01-01

    The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
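    The variance-based (Sobol) first-order indices referred to above can be estimated with the standard Saltelli A/B/AB sampling scheme; the sketch below uses a toy function in place of the VIC emulator, and the parameter labels are illustrative assumptions.

    ```python
    import numpy as np

    # Variance-based (Sobol) first-order sensitivity indices via the Saltelli scheme.
    # The "model" is a toy stand-in for the hydrologic emulator; names are illustrative.
    rng = np.random.default_rng(2)

    def model(x):                      # x: (n, 3) in [0, 1]; stand-in for, e.g., runoff change
        return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

    n, k = 20_000, 3
    A, B = rng.uniform(size=(n, k)), rng.uniform(size=(n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    for i, name in enumerate(["albedo", "LAI", "canopy_fraction"]):   # illustrative labels
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace column i of A with column i of B
        S_i = np.mean(fB * (model(ABi) - fA)) / var
        print(f"first-order index {name}: {S_i:.2f}")
    ```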

  3. SEE Sensitivity Analysis of 180 nm NAND CMOS Logic Cell for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Single Event Effects caused by energetic particle strikes on sensitive locations in a CMOS NAND logic cell designed in the 180 nm technology node, to be operated in the space radiation environment. The generation of SE transients as well as upsets as a function of the LET of the incident particle has been determined for logic devices onboard LEO and GEO satellites. The minimum magnitude pulse and pulse width for the threshold LET were determined to estimate the vulnerability/susceptibility of the device to a heavy ion strike. The impact of temperature, strike location and logic state of the NAND circuit on the total SEU/SET rate was estimated with physical mechanism simulations using the Visual TCAD, Genius, runSEU and Crad computer codes.

  4. Measurement of the Muon Production Depths at the Pierre Auger Observatory

    DOE PAGES

    Collica, Laura

    2016-09-08

    The muon content of extensive air showers is an observable sensitive to the primary composition and to the hadronic interaction properties. The Pierre Auger Observatory uses water-Cherenkov detectors to measure particle densities at the ground and therefore is sensitive to the muon content of air showers. We present here a method which allows us to estimate the muon production depths by exploiting the measurement of the muon arrival times at the ground recorded with the Surface Detector of the Pierre Auger Observatory. The analysis is performed in a large range of zenith angles, thanks to the capability of estimating and subtracting the electromagnetic component, and for energies between $10^{19.2}$ and $10^{20}$ eV.

  5. Economic evaluation of an implementation strategy for the management of low back pain in general practice.

    PubMed

    Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller

    2017-05-01

    In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective, as compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis in which costs collected from a societal perspective and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the 2 groups. Sensitivity analyses showed that results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.

  6. The effects of physical activity on impulsive choice: Influence of sensitivity to reinforcement amount and delay.

    PubMed

    Strickland, Justin C; Feinstein, Max A; Lacy, Ryan T; Smith, Mark A

    2016-05-01

    Impulsive choice is a diagnostic feature and/or complicating factor for several psychological disorders and may be examined in the laboratory using delay-discounting procedures. Recent investigators have proposed using quantitative methods of analysis to examine the behavioral processes contributing to impulsive choice. The purpose of this study was to examine the effects of physical activity (i.e., wheel running) on impulsive choice in a single-response, discrete-trial procedure using two quantitative methods of analysis. To this end, rats were assigned to physical activity or sedentary groups and trained to respond in a delay-discounting procedure. In this procedure, one lever always produced one food pellet immediately, whereas a second lever produced three food pellets after a 0, 10, 20, 40, or 80-s delay. Estimates of sensitivity to reinforcement amount and sensitivity to reinforcement delay were determined using (1) a simple linear analysis and (2) an analysis of logarithmically transformed response ratios. Both analyses revealed that physical activity decreased sensitivity to reinforcement amount and sensitivity to reinforcement delay. These findings indicate that (1) physical activity has significant but functionally opposing effects on the behavioral processes that contribute to impulsive choice and (2) both quantitative methods of analysis are appropriate for use in single-response, discrete-trial procedures. Copyright © 2016 Elsevier B.V. All rights reserved.
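    As a generic illustration of the second analysis type (log-transformed response ratios), sensitivity to delay can be read off as the slope of a least-squares fit; the choice proportions below are hypothetical and this is not the authors' exact parameterisation.

    ```python
    import numpy as np

    # Illustrative sketch: estimate sensitivity to reinforcement delay as the slope
    # of log-transformed choice ratios against log delay (ordinary least squares).
    delays = np.array([10.0, 20.0, 40.0, 80.0])          # s
    pct_larger = np.array([0.72, 0.55, 0.38, 0.22])       # hypothetical choice of larger reinforcer

    log_ratio = np.log10(pct_larger / (1 - pct_larger))   # log choice ratio (larger vs smaller)
    X = np.column_stack([np.ones_like(delays), np.log10(delays)])
    intercept, slope = np.linalg.lstsq(X, log_ratio, rcond=None)[0]

    print(f"sensitivity to delay (slope): {slope:.2f}, bias/amount term (intercept): {intercept:.2f}")
    ```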

  7. Risk of introducing exotic fruit flies, Ceratitis capitata, Ceratitis cosyra, and Ceratitis rosa (Diptera: Tephritidae), into southern China.

    PubMed

    Li, Baini; Ma, Jun; Hu, Xuenan; Liu, Haijun; Wu, Jiajiao; Chen, Hongjun; Zhang, Runjie

    2010-08-01

    Exotic fruit flies (Ceratitis spp.) are often serious agricultural pests. Here, we used pathway analysis and Monte Carlo simulations to assess the risk of introduction of Ceratitis capitata (Wiedemann), Ceratitis cosyra (Walker), and Ceratitis rosa Karsch, into southern China with fruit consignments and incoming travelers. Historical data, expert opinions, relevant literature, and archives were used to set appropriate parameters in the pathway analysis. Based on the ongoing quarantine/inspection strategies of China, as well as the interception records, we estimated the annual number of each fruit fly species entering Guangdong province undetected with commercially imported fruit, and the associated risk. We also estimated the gross number of pests arriving at Guangdong ports with incoming travelers and the associated risk. Sensitivity analysis also was performed to test the impact of parameter changes and to assess how the risk could be reduced. Results showed that the risk of introduction of the three fruit fly species into southern China with fruit consignments, which are mostly transported by ship, exists but is relatively low. In contrast, the risk of introduction with incoming travelers is high and hence deserves intensive attention. Sensitivity analysis indicated that either ensuring all shipments meet current phytosanitary requirements or increasing the proportion of fruit imports sampled for inspection could substantially reduce the risk associated with commercial imports. Sensitivity analysis also provided justification for banning importation of fresh fruit by international travelers. Thus, inspection and quarantine in conjunction with intensive detection were important mitigation measures to reduce the risk of Ceratitis spp. being introduced into China.
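    A stripped-down Monte Carlo pathway sketch of the kind described above; every distribution and parameter value here is a hypothetical placeholder rather than a figure from the study.

    ```python
    import numpy as np

    # Toy Monte Carlo pathway model: expected number of pest individuals entering
    # undetected per year with imported fruit consignments (all inputs hypothetical).
    rng = np.random.default_rng(3)
    n_sim = 100_000

    consignments = rng.poisson(2_000, n_sim)                 # consignments per year
    infestation = rng.beta(2, 200, n_sim)                    # share of consignments infested
    flies_per_infested = rng.lognormal(mean=2.0, sigma=0.8, size=n_sim)
    p_detect = rng.beta(30, 10, n_sim)                       # inspection efficacy

    entries = consignments * infestation * flies_per_infested * (1 - p_detect)
    print(f"median entries/yr: {np.median(entries):,.0f}  "
          f"95% interval: {np.percentile(entries, 2.5):,.0f}-{np.percentile(entries, 97.5):,.0f}")
    ```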

  8. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
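    The GLUE step can be sketched with a toy one-parameter stand-in for CSLM: sample Kd from a prior, keep runs whose likelihood (here a Nash-Sutcliffe efficiency) exceeds a behavioural threshold, and weight the retained parameter values; the forcing, observations, and threshold are all invented for illustration.

    ```python
    import numpy as np

    # Minimal GLUE sketch with a toy one-parameter "lake model" (hypothetical setup).
    rng = np.random.default_rng(4)

    days = np.arange(60)
    forcing = 1.0 + 0.5 * np.sin(2 * np.pi * days / 60)       # stand-in radiative forcing
    true_kd = 0.4
    obs = 60.0 + 120.0 * true_kd * forcing + rng.normal(0, 4, days.size)   # latent heat flux (W m-2)

    def toy_model(kd):                                         # flux as a function of Kd only
        return 60.0 + 120.0 * kd * forcing

    kd_samples = rng.uniform(0.1, 1.0, 5000)                   # prior samples of Kd
    sims = np.array([toy_model(kd) for kd in kd_samples])
    nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

    behavioural = nse > 0.5                                    # GLUE behavioural threshold (illustrative)
    weights = nse[behavioural] / nse[behavioural].sum()        # likelihood weights
    print(f"{behavioural.sum()} behavioural runs; "
          f"weighted Kd estimate: {np.sum(weights * kd_samples[behavioural]):.2f}")
    ```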

  9. Economics in “Global Health 2035”: a sensitivity analysis of the value of a life year estimates

    PubMed Central

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-01-01

    Background In “Global health 2035: a world converging within a generation,” The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. Methods The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. Findings We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Conclusion Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach. PMID:28400950

  10. Cost analysis of youth violence prevention.

    PubMed

    Sharp, Adam L; Prosser, Lisa A; Walton, Maureen; Blow, Frederic C; Chermack, Stephen T; Zimmerman, Marc A; Cunningham, Rebecca

    2014-03-01

    Effective violence interventions are not widely implemented, and there is little information about the cost of violence interventions. Our goal is to report the cost of a brief intervention delivered in the emergency department that reduces violence among 14- to 18-year-olds. Primary outcomes were total costs of implementation and the cost per violent event or violence consequence averted. We used primary and secondary data sources to derive the costs to implement a brief motivational interviewing intervention and to identify the number of self-reported violent events (eg, severe peer aggression, peer victimization) or violence consequences averted. One-way and multi-way sensitivity analyses were performed. Total fixed and variable annual costs were estimated at $71,784. If implemented, 4208 violent events or consequences could be prevented, costing $17.06 per event or consequence averted. Multi-way sensitivity analysis accounting for variable intervention efficacy and different cost estimates resulted in a range of $3.63 to $54.96 per event or consequence averted. Our estimates show that the cost to prevent an episode of youth violence or its consequences is less than the cost of placing an intravenous line and should not present a significant barrier to implementation.
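    The headline cost-effectiveness figure follows directly from the two numbers reported above, and a one-way sensitivity analysis simply rescales the denominator; the efficacy multipliers below are illustrative.

    ```python
    # Reproducing the headline figures from the abstract (values taken from the text).
    total_annual_cost = 71_784          # fixed + variable implementation cost (USD)
    events_averted = 4_208              # violent events or consequences prevented

    cost_per_event = total_annual_cost / events_averted
    print(f"${cost_per_event:.2f} per event or consequence averted")   # ~$17.06

    # Illustrative one-way sensitivity: halve or double the assumed intervention efficacy.
    for efficacy_multiplier in (0.5, 1.0, 2.0):
        print(efficacy_multiplier, round(total_annual_cost / (events_averted * efficacy_multiplier), 2))
    ```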

  11. Estimation of hospital efficiency--do different definitions and casemix measures for hospital output affect the results?

    PubMed

    Vitikainen, Kirsi; Street, Andrew; Linna, Miika

    2009-02-01

    Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results due to these factors. Comparison is made between activities and episode output measures, and two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as an analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but more so to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because these better capture the patient pathway, while FullDRGs are preferred to Classic DRGs particularly because of the better description of outpatient output in the former grouping system. Attention should be paid to reducing the extent of scale inefficiency in Finland.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolaczkowski, A.M.; Lambright, J.A.; Ferrell, W.L.

    This document contains the internal event initiated accident sequence analyses for Peach Bottom, Unit 2; one of the reference plants being examined as part of the NUREG-1150 effort by the Nuclear Regulatory Commission. NUREG-1150 will document the risk of a selected group of nuclear power plants. As part of that work, this report contains the overall core damage frequency estimate for Peach Bottom, Unit 2, and the accompanying plant damage state frequencies. Sensitivity and uncertainty analyses provided additional insights regarding the dominant contributors to the Peach Bottom core damage frequency estimate. The mean core damage frequency at Peach Bottom was calculated to be 8.2E-6. Station blackout type accidents (loss of all ac power) were found to dominate the overall results. Anticipated Transient Without Scram accidents were also found to be non-negligible contributors. The numerical results are largely driven by common mode failure probability estimates and to some extent, human error. Because of significant data and analysis uncertainties in these two areas (important, for instance, to the most dominant scenario in this study), it is recommended that the results of the uncertainty and sensitivity analyses be considered before any actions are taken based on this analysis.

  13. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can improve considerably the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions better close to collimated sources and boundaries than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  14. Population Trend and Elasticities of Vital Rates for Steller Sea Lions (Eumetopias jubatus) in the Eastern Gulf of Alaska: A New Life-History Table Analysis

    PubMed Central

    Maniscalco, John M.; Springer, Alan M.; Adkison, Milo D.; Parker, Pamela

    2015-01-01

    Steller sea lion (Eumetopias jubatus) numbers are beginning to recover across most of the western distinct population segment following catastrophic declines that began in the 1970s and ended around the turn of the century. This study makes use of contemporary vital rate estimates from a trend-site rookery in the eastern Gulf of Alaska (a sub-region of the western population) in a matrix population model to estimate the trend and strength of the recovery across this region between 2003 and 2013. The modeled population trend was projected into the future based on observed variation in vital rates and a prospective elasticity analysis was conducted to determine future trends and which vital rates pose the greatest threats to recovery. The modeled population grew at a mean rate of 3.5% per yr between 2003 and 2013 and was correlated with census count data from the local rookery and throughout the eastern Gulf of Alaska. If recent vital rate estimates continue with little change, the eastern Gulf of Alaska population could be fully recovered to pre-decline levels within 23 years. With density dependent growth, the population would need another 45 years to fully recover. Elasticity analysis showed that, as expected, population growth rate (λ) was most sensitive to changes in adult survival, less sensitive to changes in juvenile survival, and least sensitive to changes in fecundity. A population decline could be expected with only a 6% decrease in adult survival, whereas a 32% decrease in fecundity would be necessary to bring about a population decline. These results have important implications for population management and suggest current research priorities should be shifted to a greater emphasis on survival rates and causes of mortality. PMID:26488901

  15. A comparison of estimators from self-controlled case series, case-crossover design, and sequence symmetry analysis for pharmacoepidemiological studies.

    PubMed

    Takeuchi, Yoshinori; Shinozaki, Tomohiro; Matsuyama, Yutaka

    2018-01-08

    Despite the frequent use of self-controlled methods in pharmacoepidemiological studies, the factors that may bias the estimates from these methods have not been adequately compared in real-world settings. Here, we comparatively examined the impact of a time-varying confounder and its interactions with time-invariant confounders, time trends in exposures and events, restrictions, and misspecification of risk period durations on the estimators from three self-controlled methods. This study analyzed self-controlled case series (SCCS), case-crossover (CCO) design, and sequence symmetry analysis (SSA) using simulated and actual electronic medical records datasets. We evaluated the performance of the three self-controlled methods in simulated cohorts for the following scenarios: 1) time-invariant confounding with interactions between the confounders, 2) time-invariant and time-varying confounding without interactions, 3) time-invariant and time-varying confounding with interactions among the confounders, 4) time trends in exposures and events, 5) restricted follow-up time based on event occurrence, and 6) patient restriction based on event history. The sensitivity of the estimators to misspecified risk period durations was also evaluated. As a case study, we applied these methods to evaluate the risk of macrolides on liver injury using electronic medical records. In the simulation analysis, time-varying confounding produced bias in the SCCS and CCO design estimates, which aggravated in the presence of interactions between the time-invariant and time-varying confounders. The SCCS estimates were biased by time trends in both exposures and events. Erroneously short risk periods introduced bias to the CCO design estimate, whereas erroneously long risk periods introduced bias to the estimates of all three methods. Restricting the follow-up time led to severe bias in the SSA estimates. The SCCS estimates were sensitive to patient restriction. The case study showed that although macrolide use was significantly associated with increased liver injury occurrence in all methods, the value of the estimates varied. The estimations of the three self-controlled methods depended on various underlying assumptions, and the violation of these assumptions may cause non-negligible bias in the resulting estimates. Pharmacoepidemiologists should select the appropriate self-controlled method based on how well the relevant key assumptions are satisfied with respect to the available data.

  16. Application of Diffusion Tensor Imaging Parameters to Detect Change in Longitudinal Studies in Cerebral Small Vessel Disease

    PubMed Central

    Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen

    2016-01-01

    Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies and, therefore, sensitive surrogate markers are needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard. Sensitivity of the various parameters that can be derived from DTI is however unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We investigated 99 patients with symptomatic SVD, defined as clinical lacunar syndrome with MRI confirmation of a corresponding infarct as well as confluent white matter hyperintensities over a 3 year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over a three-year follow-up period we observed a decline in fractional anisotropy and increase in diffusivity in white matter tissue and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker for SVD progression as it had the smallest sample size estimate. This suggests disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI’s potential as surrogate marker for SVD. PMID:26808982

  17. Data analysis in emission tomography using emission-count posteriors

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.

  18. Provision of bednets and water filters to delay HIV-1 progression: cost-effectiveness analysis of a Kenyan multisite study.

    PubMed

    Kern, Eli; Verguet, Stéphane; Yuhas, Krista; Odhiambo, Frederick H; Kahn, James G; Walson, Judd

    2013-08-01

    To estimate the effectiveness, costs and cost-effectiveness of providing long-lasting insecticide-treated nets (LLINs) and point-of-use water filters to antiretroviral therapy (ART)-naïve HIV-infected adults and their family members, in the context of a multisite study in Kenya of 589 HIV-positive adults followed on average for 1.7 years. The effectiveness, costs and cost-effectiveness of the intervention were estimated using an epidemiologic-cost model. Model epidemiologic inputs were derived from the Kenya multisite study data, local epidemiological data and from the published literature. Model cost inputs were derived from published literature specific to Kenya. Uncertainty in the model estimates was assessed through univariate and multivariate sensitivity analyses. We estimated net cost savings of about US$ 26 000 for the intervention, over 1.7 years. Even when ignoring net cost savings, the intervention was found to be very cost-effective at a cost of US$ 3100 per death averted or US$ 99 per disability-adjusted life year (DALY) averted. The findings were robust to the sensitivity analysis and remained most sensitive to both the duration of ART use and the cost of ART per person-year. The provision of LLINs and water filters to ART-naïve HIV-infected adults in the Kenyan study resulted in substantial net cost savings, due to the delay in the initiation of ART. The addition of an LLIN and a point-of-use water filter to the existing package of care provided to ART-naïve HIV-infected adults could bring substantial cost savings to resource-constrained health systems in low- and middle-income countries. © 2013 Blackwell Publishing Ltd.

  19. Clinical Benefits, Costs, and Cost-Effectiveness of Neonatal Intensive Care in Mexico

    PubMed Central

    Profit, Jochen; Lee, Diana; Zupancic, John A.; Papile, LuAnn; Gutierrez, Cristina; Goldie, Sue J.; Gonzalez-Pier, Eduardo; Salomon, Joshua A.

    2010-01-01

    Background Neonatal intensive care improves survival, but is associated with high costs and disability amongst survivors. Recent health reform in Mexico launched a new subsidized insurance program, necessitating informed choices on the different interventions that might be covered by the program, including neonatal intensive care. The purpose of this study was to estimate the clinical outcomes, costs, and cost-effectiveness of neonatal intensive care in Mexico. Methods and Findings A cost-effectiveness analysis was conducted using a decision analytic model of health and economic outcomes following preterm birth. Model parameters governing health outcomes were estimated from Mexican vital registration and hospital discharge databases, supplemented with meta-analyses and systematic reviews from the published literature. Costs were estimated on the basis of data provided by the Ministry of Health in Mexico and World Health Organization price lists, supplemented with published studies from other countries as needed. The model estimated changes in clinical outcomes, life expectancy, disability-free life expectancy, lifetime costs, disability-adjusted life years (DALYs), and incremental cost-effectiveness ratios (ICERs) for neonatal intensive care compared to no intensive care. Uncertainty around the results was characterized using one-way sensitivity analyses and a multivariate probabilistic sensitivity analysis. In the base-case analysis, neonatal intensive care for infants born at 24–26, 27–29, and 30–33 weeks gestational age prolonged life expectancy by 28, 43, and 34 years and averted 9, 15, and 12 DALYs, at incremental costs per infant of US$11,400, US$9,500, and US$3,000, respectively, compared to an alternative of no intensive care. The ICERs of neonatal intensive care at 24–26, 27–29, and 30–33 weeks were US$1,200, US$650, and US$240, per DALY averted, respectively. The findings were robust to variation in parameter values over wide ranges in sensitivity analyses. Conclusions Incremental cost-effectiveness ratios for neonatal intensive care imply very high value for money on the basis of conventional benchmarks for cost-effectiveness analysis. Please see later in the article for the Editors' Summary PMID:21179496
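    The reported ICERs follow from dividing the incremental costs by the DALYs averted quoted above; small differences from the published figures presumably reflect rounding or discounting of the underlying estimates.

    ```python
    # Incremental cost-effectiveness ratios from the figures quoted in the abstract;
    # small differences from the published ICERs likely reflect rounding/discounting.
    groups = {"24-26 wk": (11_400, 9), "27-29 wk": (9_500, 15), "30-33 wk": (3_000, 12)}
    for label, (incremental_cost, dalys_averted) in groups.items():
        print(f"{label}: ~US${incremental_cost / dalys_averted:,.0f} per DALY averted")
    ```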

  20. Setting Priorities in Behavioral Interventions: An Application to Reducing Phishing Risk.

    PubMed

    Canfield, Casey Inez; Fischhoff, Baruch

    2018-04-01

    Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit. © 2017 Society for Risk Analysis.
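    The signal detection theory quantities mentioned above (sensitivity d′ and response bias) are computed from hit and false-alarm rates as sketched below; the user profiles are hypothetical, not the study's parameter estimates.

    ```python
    from scipy.stats import norm

    # Signal detection theory sketch: sensitivity (d') and response bias (c)
    # from hit and false-alarm rates; the rates below are hypothetical user profiles.
    def sdt(hit_rate, fa_rate):
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa                  # ability to distinguish phishing from legitimate
        criterion = -0.5 * (z_hit + z_fa)       # response bias (criterion placement)
        return d_prime, criterion

    for label, hr, fa in [("vulnerable user", 0.55, 0.40), ("typical user", 0.80, 0.15)]:
        d, c = sdt(hr, fa)
        print(f"{label}: d' = {d:.2f}, c = {c:.2f}")
    ```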

  1. Cost analysis of school-based intermittent screening and treatment of malaria in Kenya

    PubMed Central

    2011-01-01

    Background The control of malaria in schools is receiving increasing attention, but there remains currently no consensus as to the optimal intervention strategy. This paper analyses the costs of intermittent screening and treatment (IST) of malaria in schools, implemented as part of a cluster-randomized controlled trial on the Kenyan coast. Methods Financial and economic costs were estimated using an ingredients approach whereby all resources required in the delivery of IST are quantified and valued. Sensitivity analysis was conducted to investigate how programme variation affects costs and to identify potential cost savings in the future implementation of IST. Results The estimated financial cost of IST per child screened is US$ 6.61 (economic cost US$ 6.24). Key contributors to cost were salary costs (36%) and malaria rapid diagnostic tests (RDT) (22%). Almost half (47%) of the intervention cost comprises redeployment of existing resources including health worker time and use of hospital vehicles. Sensitivity analysis identified changes to intervention delivery that can reduce programme costs by 40%, including use of alternative RDTs and removal of supervised treatment. Cost-effectiveness is also likely to be highly sensitive to the proportion of children found to be RDT-positive. Conclusion In the current context, school-based IST is a relatively expensive malaria intervention, but reducing the complexity of delivery can result in considerable savings in the cost of intervention. (Costs are reported in US$ 2010). PMID:21933376

  2. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    PubMed

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2016-12-01

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. © The Author(s) 2014.

  3. A Hybrid Bayesian Hierarchical Model Combining Cohort and Case-control Studies for Meta-analysis of Diagnostic Tests: Accounting for Partial Verification Bias

    PubMed Central

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R.; Chu, Haitao

    2014-01-01

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. PMID:24862512

  4. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    USGS Publications Warehouse

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  5. Variation in Estimated Ozone-Related Health Impacts of Climate Change due to Modeling Choices and Assumptions

    PubMed Central

    Post, Ellen S.; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Leung, Lai-Yung; Nolte, Christopher G.; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee

    2012-01-01

    Background: Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Objectives: Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Methods: Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration–response functions. Using the U.S. Environmental Protection Agency’s (EPA’s) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O3)-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration–response functions were chosen to match those used in the U.S. EPA’s 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O3. Results: Different combinations of methodological choices produced a range of estimates of national O3-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of the climate change and the air quality model reflected the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Conclusions: Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O3-related human health effects resulting from climate change. PMID:22796531

  6. Setting up a proper power spectral density (PSD) and autocorrelation analysis for material and process characterization

    NASA Astrophysics Data System (ADS)

    Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.

    2018-03-01

    Power spectral density (PSD) analysis is playing more and more a critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step to get an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image-processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur [1], as well as low- and high-frequency roughness content, and we apply this technique to guide EUV material stack selection. Our results clearly indicate that, if properly used, PSD methodology is a very sensitive tool for investigating material and process variations.
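    A minimal sketch of a PSD-based roughness analysis on a synthetic line edge, showing how roughness amplitude, correlation length, and SEM noise shape the spectrum; all values are assumptions for illustration and this is not the measurement pipeline used in the paper.

    ```python
    import numpy as np
    from scipy.signal import periodogram

    # PSD of a synthetic line edge: correlated roughness plus a white SEM-noise floor.
    rng = np.random.default_rng(5)

    pixel_nm, n = 1.0, 4096
    corr_len, sigma, noise = 20.0, 1.5, 0.5          # nm, nm, nm (hypothetical)

    # correlated edge: exponentially filtered white noise, rescaled to the target sigma
    lags = np.arange(-100, 101)
    kernel = np.exp(-np.abs(lags) / corr_len)
    edge = np.convolve(rng.normal(0, 1, n), kernel, mode="same")
    edge *= sigma / edge.std()
    measured = edge + rng.normal(0, noise, n)        # SEM noise adds a flat (white) floor

    freq, psd = periodogram(measured, fs=1.0 / pixel_nm)
    sigma_from_psd = np.sqrt(np.sum(psd) * (freq[1] - freq[0]))   # integral of PSD = variance
    print(f"sigma recovered from PSD: {sigma_from_psd:.2f} nm")
    ```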

  7. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.
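    How the data gaps flagged above (spillover, duration of effectiveness, risk reduction) drive the benefit-cost ratio can be shown with a toy calculation; apart from the 37.4% behaviour-change rate taken from the abstract, every input is hypothetical.

    ```python
    # Toy benefit-cost sketch showing how the ratio moves with the assumptions the
    # abstract flags as data gaps; all inputs except the 37.4% rate are hypothetical.
    program_cost = 100_000.0                    # USD, cost of the education programme
    participants = 5_000
    behaviour_change_rate = 0.374               # share improving food safety behaviours (from the abstract)
    illness_cost = 1_500.0                      # USD per foodborne illness episode

    for risk_reduction, spillover, years_effective in [(0.02, 1.0, 1), (0.05, 1.5, 3)]:
        protected = participants * behaviour_change_rate * spillover
        benefits = protected * risk_reduction * illness_cost * years_effective
        print(f"benefit-cost ratio: {benefits / program_cost:.1f}")
    ```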

  8. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function, so the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
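    The meta-model idea can be sketched with an off-the-shelf RBF-kernel SVR surrogate and Monte Carlo Sobol estimation on the surrogate; this does not reproduce the paper's mixed kernel or its coefficient post-processing, and the toy model and settings are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Surrogate-based GSA sketch: fit an RBF-kernel SVR to a small design, then
    # estimate first-order Sobol indices by Monte Carlo on the cheap surrogate.
    rng = np.random.default_rng(7)

    def expensive_model(x):                       # toy stand-in for the true model
        return np.sin(np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2

    X_train = rng.uniform(size=(80, 2))
    surrogate = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_train, expensive_model(X_train))

    n = 20_000
    A, B = rng.uniform(size=(n, 2)), rng.uniform(size=(n, 2))
    fA, fB = surrogate.predict(A), surrogate.predict(B)
    var = np.var(np.concatenate([fA, fB]))
    for i in range(2):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        print(f"S_{i + 1} ~ {np.mean(fB * (surrogate.predict(ABi) - fA)) / var:.2f}")
    ```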

  9. Notes on a New Coherence Estimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickel, Douglas L.

    This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to the knowledge of the noise value. ACKNOWLEDGEMENTS The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
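    For context, the conventional sample coherence estimator can be written in a few lines; this is shown only as background and is not the new estimator discussed in the report, and the simulated channels below are illustrative.

    ```python
    import numpy as np

    # Conventional sample coherence of two complex channels sharing a common signal.
    rng = np.random.default_rng(6)
    n, true_coh = 256, 0.8

    def cnoise(size):
        # unit-variance circular complex Gaussian noise
        return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

    common = cnoise(n)
    s1 = common
    s2 = true_coh * common + np.sqrt(1 - true_coh**2) * cnoise(n)

    coh_hat = np.abs(np.vdot(s1, s2)) / np.sqrt(np.vdot(s1, s1).real * np.vdot(s2, s2).real)
    print(f"sample coherence: {coh_hat:.3f} (true value {true_coh})")
    ```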

  10. Competing for Quality: An Analysis of the Comparability of Public School Teacher Salaries to Earning Opportunities in Other Occupations. Occasional Papers in Educational Policy Analysis. Paper No. 415.

    ERIC Educational Resources Information Center

    Bird, Ronald E.

    This paper describes an initial effort to provide a carefully reasoned, factually based, systematic analysis of teacher pay in comparison to pay in other occupations available to college-educated workers. It also reports on the sensitivity of these salary comparison estimates to differences in certain characteristics of the labor force, such as…

  11. Physician and patient willingness to pay for electronic cardiovascular disease management.

    PubMed

    Deal, Ken; Keshavjee, Karim; Troyan, Sue; Kyba, Robert; Holbrook, Anne Marie

    2014-07-01

    Cardiovascular disease (CVD) is an important target for electronic decision support. We examined the potential sustainability of an electronic CVD management program using a discrete choice experiment (DCE). Our objective was to estimate physician and patient willingness-to-pay (WTP) for the current and enhanced programs. Focus groups, expert input, and literature searches determined the attributes to be evaluated in the physician and patient DCEs, which were carried out using a Web-based program. Hierarchical Bayes analysis estimated preference coefficients for each respondent, and latent class analysis segmented each sample. Simulations were used to estimate WTP for each of the attributes individually and for an enhanced vascular management system. In total, 144 participants (70 physicians, 74 patients) completed the DCE. Overall, access speed to updated records and monthly payments for a nurse coordinator were the main determinants of physician choices. Two distinctly different segments of physicians were identified - one very sensitive to the monthly subscription fee and the speed of updating the tracker with new patient data, and the other very sensitive to the monthly cost of the nurse coordinator and government billing incentives. Patient choices were most significantly influenced by the yearly subscription cost. The estimated physician WTP was slightly above the estimated threshold for sustainability, while the patient WTP was below it. Current willingness to pay for electronic cardiovascular disease management should encourage innovation to provide economies of scale in program development, delivery and maintenance to meet sustainability thresholds. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
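
    As a minimal sketch of how discrete-choice part-worths are commonly converted into marginal willingness-to-pay, the snippet below divides each attribute's utility coefficient by the (negative) cost coefficient. The coefficients are invented placeholders, not the study's hierarchical Bayes estimates.

```python
# Convert hypothetical DCE part-worth utilities into marginal WTP.
coefficients = {
    "faster_record_update": 0.80,   # utility of faster access to updated records
    "nurse_coordinator":    0.55,   # utility of a nurse coordinator service
}
cost_coefficient = -0.02            # utility per $1 of monthly subscription fee

for attribute, beta in coefficients.items():
    wtp = -beta / cost_coefficient  # dollars per month the respondent would trade
    print(f"WTP for {attribute}: ${wtp:.0f}/month")
```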

  12. Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence

    PubMed Central

    Claggett, B.; Lagakos, S.W.; Wang, R.

    2011-01-01

    Summary Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF Estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF Estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904

  13. Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.

    PubMed

    Claggett, B; Lagakos, S W; Wang, R

    2012-03-01

    Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The International Biometric Society.

  14. Influence of ECG sampling rate in fetal heart rate variability analysis.

    PubMed

    De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R

    2017-07-01

    Fetal hypoxia results in a fetal blood acidosis (pH < 7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies have demonstrated significant changes in heart rate variability in hypoxic fetuses. Fetal heart rate variability analysis could therefore be of great help in fetal hypoxia prediction. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate. Indeed, a low sampling rate can induce variability in heartbeat detection, which alters the heart rate variability estimation. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method will be less sensitive to ECG sampling frequency changes than common heart rate variability analysis methods. We then compare the results of this new heart rate variability analysis method at two different sampling frequencies (250 and 1000 Hz).

  15. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost estimation analysis for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians, and one doctor, and the second (based on an actual operating clinic) with two units, three technicians, and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures on the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As the examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, to the point of making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures.
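
    A stripped-down simulation in the same spirit is sketched below: it accumulates fixed and per-examination costs over a working period, wastes a slot whenever an image acquisition fails, and reports the unit cost as total cost divided by completed examinations. All parameter values are hypothetical, and the model omits the paper's detailed resource and failure-type structure.

```python
# Minimal unit-cost simulation for a single-unit clinic with hypothetical costs.
import random

def simulate_unit_cost(days=250, hours_per_day=8, exam_minutes=15,
                       failure_prob=0.05, fixed_cost_per_day=1200.0,
                       consumable_cost_per_exam=4.0, seed=1):
    """Unit cost = total cost over the period / completed examinations."""
    rng = random.Random(seed)
    total_cost, completed = 0.0, 0
    for _ in range(days):
        total_cost += fixed_cost_per_day      # equipment, staff, and space per working day
        minutes_left = hours_per_day * 60
        while minutes_left >= exam_minutes:
            minutes_left -= exam_minutes
            total_cost += consumable_cost_per_exam
            if rng.random() < failure_prob:
                continue                       # failed acquisition: slot and consumables spent, repeat needed
            completed += 1
    return total_cost / completed

for p in (0.0, 0.05, 0.10):
    print(f"failure probability {p:.0%}: unit cost ~ ${simulate_unit_cost(failure_prob=p):.2f}")
```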

  16. Budget impact analysis of trastuzumab in early breast cancer: a hospital district perspective.

    PubMed

    Purmonen, Timo T; Auvinen, Päivi K; Martikainen, Janne A

    2010-04-01

    Adjuvant trastuzumab is widely used in HER2-positive (HER2+) early breast cancer, and despite its cost-effectiveness, it causes substantial costs for health care. The purpose of the study was to develop a tool for estimating the budget impact of new cancer treatments. With this tool, we were able to estimate the budget impact of adjuvant trastuzumab, as well as the probability of staying within a given budget constraint. The created model-based evaluation tool was used to explore the budget impact of trastuzumab in early breast cancer in a single Finnish hospital district with 250,000 inhabitants. The used model took into account the number of patients, HER2+ prevalence, length and cost of treatment, and the effectiveness of the therapy. Probabilistic sensitivity analysis and alternative case scenarios were performed to ensure the robustness of the results. Introduction of adjuvant trastuzumab caused substantial costs for a relatively small hospital district. In base-case analysis the 4-year net budget impact was 1.3 million euro. The trastuzumab acquisition costs were partially offset by the reduction in costs associated with the treatment of cancer recurrence and metastatic disease. Budget impact analyses provide important information about the overall economic impact of new treatments, and thus offer complementary information to cost-effectiveness analyses. Inclusion of treatment outcomes and probabilistic sensitivity analysis provides more realistic estimates of the net budget impact. The length of trastuzumab treatment has a strong effect on the budget impact.

  17. Sex Estimation From Sternal Measurements Using Multidetector Computed Tomography

    PubMed Central

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-01-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification. Skeletal surveys are the main methods for sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age 44 ± 8.1 years; range 30–60 years) were included in the study. Manubrium length (ML), mesosternum length (MSL), Sternebra 1 width (S1W), and Sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values were significantly higher than those of females (P < 0.001), while SI was significantly lower in males (P < 0.001). In discriminant analysis, MSL had a high accuracy rate, 80.2% in females and 80.9% in males. MSL also had the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate in stepwise discriminant analysis, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum may provide important information for sex estimation. PMID:25501090

  18. Comparison of Estimates between Cohort and Case-Control Studies in Meta-Analyses of Therapeutic Interventions: A Meta-Epidemiological Study.

    PubMed

    Lanza, Amy; Ravaud, Philippe; Riveros, Carolina; Dechartres, Agnes

    2016-01-01

    Observational studies are increasingly being used for assessing therapeutic interventions. Case-control studies are generally considered to have greater risk of bias than cohort studies, but we lack evidence of differences in effect estimates between the 2 study types. We aimed to compare estimates between cohort and case-control studies in meta-analyses of observational studies of therapeutic interventions by using a meta-epidemiological study. We used a random sample of meta-analyses of therapeutic interventions published in 2013 that included both cohort and case-control studies assessing a binary outcome. For each meta-analysis, the ratio of estimates (RE) was calculated by comparing the estimate in case-control studies to that in cohort studies. Then, we used random-effects meta-analysis to estimate a combined RE across meta-analyses. An RE < 1 indicated that case-control studies yielded larger estimates than cohort studies. The final analysis included 23 meta-analyses: 138 cohort and 133 case-control studies. Treatment effect estimates did not significantly differ between case-control and cohort studies (combined RE 0.97 [95% CI 0.86-1.09]). Heterogeneity was low, with between-meta-analysis variance τ2 = 0.0049. Estimates did not differ between case-control and prospective or retrospective cohort studies (RE = 1.05 [95% CI 0.96-1.15] and RE = 0.99 [95% CI, 0.83-1.19], respectively). Sensitivity analysis of studies reporting adjusted estimates also revealed no significant difference (RE = 1.03 [95% CI 0.91-1.16]). Heterogeneity was also low for these analyses. We found no significant difference in treatment effect estimates between case-control and cohort studies assessing therapeutic interventions.
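
    The pooling step can be illustrated with a minimal DerSimonian-Laird random-effects combination of log ratios of estimates, as sketched below with made-up inputs; the 23 meta-analyses from the study are not reproduced.

```python
# DerSimonian-Laird random-effects pooling of log ratios of estimates (RE),
# where RE = case-control estimate / cohort estimate. Inputs are hypothetical.
import math

log_re = [-0.10, 0.05, 0.02, -0.03, 0.08]   # log(RE) per meta-analysis
se     = [0.12, 0.10, 0.15, 0.09, 0.20]     # standard errors of log(RE)

w = [1 / s**2 for s in se]                   # inverse-variance (fixed-effect) weights
fixed = sum(wi * yi for wi, yi in zip(w, log_re)) / sum(w)
q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, log_re))
df = len(log_re) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                # between-meta-analysis variance

w_star = [1 / (s**2 + tau2) for s in se]     # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_star, log_re)) / sum(w_star)
se_pooled = math.sqrt(1 / sum(w_star))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"combined RE = {math.exp(pooled):.2f} "
      f"[95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}], tau^2 = {tau2:.4f}")
```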

  19. Present and future of prophylactic antibiotics for severe acute pancreatitis

    PubMed Central

    Jiang, Kun; Huang, Wei; Yang, Xiao-Nan; Xia, Qing

    2012-01-01

    AIM: To investigate the role of prophylactic antibiotics in reducing mortality in severe acute pancreatitis (SAP) patients, a role increasingly questioned by randomized controlled trials (RCTs) and meta-analyses. METHODS: An updated meta-analysis was performed. RCTs comparing prophylactic antibiotics for SAP with control or placebo were included for meta-analysis. The mortality outcomes were pooled for estimation, and re-pooled estimation was performed by the sensitivity analysis of an ideal large-scale RCT. RESULTS: The 11 currently available RCTs were included. Subgroup analysis showed a significant reduction of mortality rate in the period before 2000, but no significant reduction in the period from 2000 onward [risk ratio (RR) = 1.01, P = 0.98]. A funnel plot indicated that there might be apparent publication bias in the period before 2000. Sensitivity analysis showed that the RR of mortality rate ranged from 0.77 to 1.00 with a relatively narrow confidence interval (P < 0.05). However, the small lower limit of the range of the number needed to treat (7-5096 patients) implied that certain SAP patients could still potentially avoid death through antibiotic prophylaxis. CONCLUSION: Current evidence does not support prophylactic antibiotics as a routine treatment for SAP, but the subpopulation that might benefit requires further investigation. PMID:22294832

  20. Design analysis of an MPI human functional brain scanner

    PubMed Central

    Mason, Erica E.; Cooley, Clarissa Z.; Cauley, Stephen F.; Griswold, Mark A.; Conolly, Steven M.; Wald, Lawrence L.

    2017-01-01

    MPI’s high sensitivity makes it a promising modality for imaging brain function. Functional contrast is proposed based on blood SPION concentration changes due to Cerebral Blood Volume (CBV) increases during activation, a mechanism utilized in fMRI studies. MPI offers the potential for a direct and more sensitive measure of SPION concentration, and thus CBV, than fMRI. As such, fMPI could surpass fMRI in sensitivity, enhancing the scientific and clinical value of functional imaging. As human-sized MPI systems have not been attempted, we assess the technical challenges of scaling MPI from rodent to human brain. We use a full-system MPI simulator to test arbitrary hardware designs and encoding practices, and we examine tradeoffs imposed by constraints that arise when scaling to human size as well as safety constraints (PNS and central nervous system stimulation) not considered in animal scanners, thereby estimating spatial resolutions and sensitivities achievable with current technology. Using a projection FFL MPI system, we examine coil hardware options and their implications for sensitivity and spatial resolution. We estimate that an fMPI brain scanner is feasible, although with reduced sensitivity (20×) and spatial resolution (5×) compared to existing rodent systems. Nonetheless, it retains sufficient sensitivity and spatial resolution to make it an attractive future instrument for studying the human brain; additional technical innovations can result in further improvements. PMID:28752130

  1. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  2. Applying causal mediation analysis to personality disorder research.

    PubMed

    Walters, Glenn D

    2018-01-01

    This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
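
    As a minimal sketch of the bootstrap step recommended above, the snippet below estimates an indirect effect a*b from two least-squares regressions and builds a percentile bootstrap confidence interval around it, using simulated data in place of real personality-disorder measures.

```python
# Nonparametric bootstrap of the indirect (mediated) effect a*b on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                          # independent variable
m = 0.5 * x + rng.normal(size=n)                # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)      # dependent variable

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                  # slope of M on X
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1] # slope of Y on M, controlling for X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                 # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```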

  3. A cost-benefit analysis on the specialization in departments of obstetrics and gynecology in Japan.

    PubMed

    Shen, Junyi; Fukui, On; Hashimoto, Hiroyuki; Nakashima, Takako; Kimura, Tadashi; Morishige, Kenichiro; Saijo, Tatsuyoshi

    2012-03-27

    In April 2008, specialization of the departments of obstetrics and gynecology was implemented in the Sennan area of Osaka prefecture, Japan, with the aim of solving problems in the regional provision of obstetric services. Under this specialization, the departments of obstetrics and gynecology in two city hospitals were combined into one medical center, with one hospital in charge of the department of gynecology and the other operating the department of obstetrics. In this paper, we implement a cost-benefit analysis to evaluate the validity of this specialization. The benefit-cost ratio is estimated at 1.367 under a basic scenario, indicating that the specialization can generate a net benefit. In addition, to account for different kinds of future uncertainty, a number of sensitivity analyses are conducted. The results of these sensitivity analyses suggest that the specialization is valid in the sense that all the estimated benefit-cost ratios remain above 1.0 in every case.

  4. FDG-PET, CT, MRI for diagnosis of local residual or recurrent nasopharyngeal carcinoma, which one is the best? A systematic review.

    PubMed

    Liu, Tao; Xu, Wen; Yan, Wei-Li; Ye, Ming; Bai, Yong-Rui; Huang, Gang

    2007-12-01

    To perform a systematic review to compare FDG-PET, CT, and MRI imaging for diagnosis of local residual or recurrent nasopharyngeal carcinoma. MEDLINE, EMBASE, the CBMdisc databases and some other databases were searched for relevant original articles published from January 1990 to June 2007. Inclusion criteria were as follows: Articles were reported in English or Chinese; FDG-PET, CT, or MRI was used to detect local residual or recurrent nasopharyngeal carcinoma; histopathologic analysis and/or close clinical and imaging follow-up for at least 6 months were the reference standard. Two reviewers independently extracted data. Software called "Meta-DiSc" was used to obtain pooled estimates of sensitivity, specificity, diagnostic odds ratio (DOR), summary receiver operating characteristic (SROC) curves, and the Q* index. Twenty-one articles fulfilled all inclusion criteria. The pooled sensitivity estimates for PET (95%) were significantly higher than CT (76%) (P<0.001) and MRI (78%) (P<0.001). The pooled specificity estimates for PET (90%) were significantly higher than CT (59%) (P<0.001) and MRI (76%) (P<0.001). The pooled DOR estimates for PET (96.51) were significantly higher than CT (7.01) (P<0.001) and MRI (8.68) (P<0.001). The SROC curve for FDG-PET showed better diagnostic accuracy than CT and MRI. The Q* index for PET (0.92) was significantly higher than CT (0.72) (P<0.001) and MRI (0.76) (P<0.01). For PET, the sensitivity and diagnostic OR using qualitative analysis were significantly higher than using both qualitative and quantitative analyses (P<0.01). For CT, the sensitivity, specificity, diagnostic OR, and the Q* index for dual-section helical and multi-section helical were all significantly higher than for nonhelical and single-section helical (P<0.01). The sensitivity for section thickness <5 mm was significantly lower than for ≥5 mm (P<0.01), while the specificity was significantly higher (P<0.01). For MRI, there were no significant differences found between magnetic field strengths <1.5 T and ≥1.5 T (P>0.05). FDG-PET was the best modality for diagnosis of local residual or recurrent nasopharyngeal carcinoma. The type of analysis for PET imaging and the section thickness for CT would affect the diagnostic results. Dual-section helical and multi-section helical CT were better than nonhelical and single-section helical CT.

  5. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI.

  7. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  8. Detection of Echinococcus multilocularis by MC-PCR: evaluation of diagnostic sensitivity and specificity without gold standard

    PubMed Central

    Wahlström, Helene; Comin, Arianna; Isaksson, Mats; Deplazes, Peter

    2016-01-01

    Introduction A semi-automated magnetic capture probe-based DNA extraction and real-time PCR method (MC-PCR), allowing for a more efficient large-scale surveillance of Echinococcus multilocularis occurrence, has been developed. The test sensitivity has previously been evaluated using the sedimentation and counting technique (SCT) as a gold standard. However, as the sensitivity of the SCT is not 1, the test characteristics of the MC-PCR were also evaluated using latent class analysis, a methodology not requiring a gold standard. Materials and methods Test results, MC-PCR and SCT, from a previous evaluation of the MC-PCR using 177 foxes shot in the spring (n=108) and autumn 2012 (n=69) in high prevalence areas in Switzerland were used. Latent class analysis was used to estimate the test characteristics of the MC-PCR. Although it is not the primary aim of this study, estimates of the test characteristics of the SCT were also obtained. Results and discussion This study showed that the sensitivity of the MC-PCR was 0.88 [95% posterior credible interval (PCI) 0.80–0.93], which was not significantly different from that of the SCT, 0.83 (95% PCI 0.76–0.88), which is currently considered as the gold standard. The specificity of both tests was high, 0.98 (95% PCI 0.94–0.99) for the MC-PCR and 0.99 (95% PCI 0.99–1) for the SCT. In a previous study, using fox scats from a low prevalence area, the specificity of the MC-PCR was higher, 0.999 (95% PCI 0.997–1). One reason for the lower estimate of the specificity in this study could be that the MC-PCR detects DNA from infected but non-infectious rodents eaten by foxes. When using MC-PCR in low prevalence areas or areas free from the parasite, a positive result in the MC-PCR should be regarded as a true positive. Conclusion The sensitivity of the MC-PCR (0.88) was comparable to the sensitivity of SCT (0.83). PMID:26968153

  9. Atmospheric pressure loading parameters from very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.; Gipson, John M.

    1994-01-01

    Atmospheric mass loading produces a primarily vertical displacement of the Earth's crust. This displacement is correlated with surface pressure and is large enough to be detected by very long baseline interferometry (VLBI) measurements. Using the measured surface pressure at VLBI stations, we have estimated the atmospheric loading term for each station location directly from VLBI data acquired from 1979 to 1992. Our estimates of the vertical sensitivity to change in pressure range from 0 to -0.6 mm/mbar depending on the station. These estimates agree with inverted barometer model calculations (Manabe et al., 1991; vanDam and Herring, 1994) of the vertical displacement sensitivity computed by convolving actual pressure distributions with loading Green's functions. The pressure sensitivity tends to be smaller for stations near the coast, which is consistent with the inverted barometer hypothesis. Applying this estimated pressure loading correction in standard VLBI geodetic analysis improves the repeatability of estimated lengths of 25 out of 37 baselines that were measured at least 50 times. In a root-sum-square (rss) sense, the improvement generally increases with baseline length at a rate of about 0.3 to 0.6 ppb depending on whether the baseline stations are close to the coast. For the 5998-km baseline from Westford, Massachusetts, to Wettzell, Germany, the rss improvement is about 3.6 mm out of 11.0 mm. The average rss reduction of the vertical scatter for inland stations ranges from 2.7 to 5.4 mm.

  10. Assessing operating characteristics of CAD algorithms in the absence of a gold standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.

    2010-04-15

    Purpose: The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): Free search markings of four radiologists were compared to markings from four different CAD assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates considered by the LIDC, the LCA estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than CAD assisted readers' (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than CAD assisted readers' (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
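
    The core of such a latent class analysis can be sketched as a small EM algorithm: assuming conditional independence of reader detections given true lesion status, it alternates between computing each candidate's posterior probability of being a true lesion and updating the prevalence and per-reader sensitivity/specificity. The example below runs on simulated detections and is not the authors' estimation code.

```python
# Two-class latent class model (EM) for binary detections by several readers,
# assuming conditional independence given true lesion status. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_readers, n_items = 4, 1145
true = rng.random(n_items) < 0.4                      # latent lesion status
sens_true, spec_true = 0.7, 0.95
detect = np.where(true, rng.random((n_readers, n_items)) < sens_true,
                        rng.random((n_readers, n_items)) < (1 - spec_true)).astype(float)

prev, sens, spec = 0.5, np.full(n_readers, 0.6), np.full(n_readers, 0.9)
for _ in range(200):                                  # EM iterations
    # E-step: posterior probability each candidate is a true lesion
    like_pos = prev * np.prod(np.where(detect == 1, sens[:, None], 1 - sens[:, None]), axis=0)
    like_neg = (1 - prev) * np.prod(np.where(detect == 1, 1 - spec[:, None], spec[:, None]), axis=0)
    post = like_pos / (like_pos + like_neg)
    # M-step: update prevalence and per-reader sensitivity/specificity
    prev = post.mean()
    sens = (detect * post).sum(axis=1) / post.sum()
    spec = ((1 - detect) * (1 - post)).sum(axis=1) / (1 - post).sum()

print("estimated prevalence:", round(prev, 3))
print("estimated sensitivities:", np.round(sens, 3))
print("estimated specificities:", np.round(spec, 3))
```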

  11. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method and in conjunction a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.

  12. Postoperative complications following colectomy for ulcerative colitis: A validation study

    PubMed Central

    2012-01-01

    Background Ulcerative colitis (UC) patients failing medical management require colectomy. This study compares risk estimates for predictors of postoperative complication derived from administrative data against those of chart review and evaluates the accuracy of administrative coding for this population. Methods Hospital administrative databases were used to identify adults with UC undergoing colectomy from 1996–2007. Medical charts were reviewed and regression analyses comparing chart versus administrative data were performed to assess the effect of age, emergent operation, and Charlson comorbidities on the occurrence of postoperative complications. Sensitivity, specificity, and positive/negative predictive values of administrative coding for identifying the study population, Charlson comorbidities, and postoperative complications were assessed. Results Compared to chart review, administrative data estimated a higher magnitude of effect for emergent admission (OR 2.52 [95% CI: 1.80–3.52] versus 1.49 [1.06–2.09]) and Charlson comorbidities (OR 2.91 [1.86–4.56] versus 1.50 [1.05–2.15]) as predictors of postoperative complications. Administrative data correctly identified UC and colectomy in 85.9% of cases. The administrative database was 37% sensitive in identifying patients with ≥ 1 Charlson comorbidity. Restricting analysis to active comorbidities increased the sensitivity to 63%. The sensitivity of identifying patients with at least one postoperative complication was 68%; restricting analysis to more severe complications improved the sensitivity to 84%. Conclusions Administrative data identified the same risk factors for postoperative complications as chart review, but overestimated the magnitude of risk. This discrepancy may be explained by coding inaccuracies that selectively identify the most serious complications and comorbidities. PMID:22943760

  13. Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Rui; Huang, Zhenyu; Wang, Shaobu

    2015-07-30

    With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry nowadays. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in power systems associated with DFIGs. However, this dynamic analysis sometimes cannot work because the parameters of the DFIGs are not accurate enough. To solve the problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated regarding the measurement noise, initial state errors and parameter errors. The results indicate that this EnKF method has robust performance on the state estimation and parameter calibration of DFIGs.
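
    A minimal ensemble Kalman filter update, of the kind referred to above, is sketched below on a toy two-state model with a scalar measurement; the state transition, noise levels, and measurements are hypothetical stand-ins rather than a DFIG model.

```python
# Ensemble Kalman filter forecast/update cycle on a toy two-state model.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state = 50, 2
H = np.array([[1.0, 0.0]])                 # measure the first state only
R = np.array([[0.05]])                     # measurement noise covariance

def forecast(x):
    """Toy nonlinear state transition standing in for the plant dynamics."""
    return np.array([x[0] + 0.1 * x[1], 0.95 * x[1] + 0.05 * np.sin(x[0])])

ens = rng.normal([1.0, 0.0], 0.3, size=(n_ens, n_state))   # initial ensemble
for z in [1.05, 1.12, 1.20, 1.26]:                          # incoming measurements
    ens = np.array([forecast(x) for x in ens])              # forecast step
    ens += rng.normal(0, 0.02, ens.shape)                   # additive process noise
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (n_ens - 1)                               # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain
    for i in range(n_ens):                                  # perturbed-observation update
        zi = z + rng.normal(0, np.sqrt(R[0, 0]))
        ens[i] += K @ (np.array([zi]) - H @ ens[i])
    print("analysis mean:", np.round(ens.mean(axis=0), 3))
```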

  14. An activity-based methodology for operations cost analysis

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David; Bilby, Curt; Frizzell, R. A.

    1991-01-01

    This report describes an activity-based cost estimation method, proposed for the Space Exploration Initiative (SEI), as an alternative to NASA's traditional mass-based cost estimation method. A case study demonstrates how the activity-based cost estimation technique can be used to identify the operations that have a significant impact on costs over the life cycle of the SEI. The case study yielded an operations cost of $101 billion for the 20-year span of the lunar surface operations for the Option 5a program architecture. In addition, the results indicated that the support and training costs for the missions were the greatest contributors to the annual cost estimates. A cost-sensitivity analysis of the cultural and architectural drivers determined that the length of training and the amount of support associated with the ground support personnel for mission activities are the most significant cost contributors.

  15. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
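
    The MVFOSM idea can be illustrated with a short first-order variance-propagation sketch: central-difference derivatives of a transport relation are combined with assumed input standard deviations to rank each input's contribution to the output variance. The power-law relation and all numbers below are hypothetical, not the paper's equations or the Costa Rica data.

```python
# First-order (MVFOSM-style) variance propagation through a generic power-law
# transport relation q_s = a * S^b * Q^c * D^e, with hypothetical inputs.
import numpy as np

def qs(slope, discharge, grain_size, a=0.05, b=1.5, c=1.2, e=-0.9):
    return a * slope**b * discharge**c * grain_size**e

means = {"slope": 0.02, "discharge": 15.0, "grain_size": 0.03}
stds  = {"slope": 0.002, "discharge": 2.0, "grain_size": 0.01}

contrib = {}
for name in means:
    hi, lo = dict(means), dict(means)
    h = 1e-4 * means[name]                  # central-difference step
    hi[name] += h
    lo[name] -= h
    dqs = (qs(**hi) - qs(**lo)) / (2 * h)   # partial derivative at the mean point
    contrib[name] = (dqs * stds[name])**2   # this input's variance contribution

total_var = sum(contrib.values())
for name, v in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * v / total_var:.1f}% of output variance")
```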

  16. Sensitivity of transitions in internal rotor molecules to a possible variation of the proton-to-electron mass ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, Paul; Ubachs, Wim; Bethlem, Hendrick L.

    2011-12-15

    Recently, methanol was identified as a sensitive target system to probe variations of the proton-to-electron mass ratio μ [Jansen et al., Phys. Rev. Lett. 106, 100801 (2011)]. The high sensitivity of methanol originates from the interplay between overall rotation and hindered internal rotation of the molecule; that is, transitions that convert internal rotation energy into overall rotation energy, or vice versa, have an enhanced sensitivity coefficient, K_μ. As internal rotation is a common phenomenon in polyatomic molecules, it is likely that other molecules display similar or even larger effects. In this paper we generalize the concepts that form the foundation of the high sensitivity in methanol and use this to construct an approximate model which makes it possible to estimate the sensitivities of transitions in internal rotor molecules with C3v symmetry, without performing a full calculation of energy levels. We find that a reliable estimate of transition sensitivities can be obtained from the three rotational constants (A, B, and C) and three torsional constants (F, V3, and ρ). This model is verified by comparing obtained sensitivities for methanol, acetaldehyde, acetamide, methyl formate, and acetic acid with a full analysis of the molecular Hamiltonian. Of the molecules considered, methanol is by far the most suitable candidate for laboratory and cosmological tests searching for a possible variation of μ.

  17. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data and from a question that directly asked the radiologists for their biopsy recommendations; the latter was used as the "truth" because it is the actual recall decision and thus represents the readers' subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates obtained by thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and maximizing the power of the study. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
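
    A minimal sketch of the thresholding step is shown below: ratings are compared with a binary truth at several cut points to yield sensitivity/specificity pairs. The ratings and truth are simulated, not the study's reader data.

```python
# Threshold ROC-style ratings against a binary truth to get sensitivity/specificity.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random(200) < 0.4                      # 1 = malignant (stand-in "truth")
ratings = np.where(truth, rng.normal(65, 20, 200),
                          rng.normal(40, 20, 200)).clip(0, 100)

for threshold in (30, 50, 70):
    called_positive = ratings >= threshold
    sens = (called_positive & truth).sum() / truth.sum()
    spec = (~called_positive & ~truth).sum() / (~truth).sum()
    print(f"threshold {threshold}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```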

  18. Loop transfer recovery for general nonminimum phase discrete time systems. I - Analysis

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Sannuti, Peddapullaiah; Shamash, Yacov

    1992-01-01

    A complete analysis of loop transfer recovery (LTR) for general nonstrictly proper, not necessarily minimum phase discrete time systems is presented. Three different observer-based controllers, namely, `prediction estimator' and full or reduced-order type `current estimator' based controllers, are used. The analysis corresponding to all these three controllers is unified into a single mathematical framework. The LTR analysis given here focuses on three fundamental issues: (1) the recoverability of a target loop when it is arbitrarily given, (2) the recoverability of a target loop while taking into account its specific characteristics, and (3) the establishment of necessary and sufficient conditions on the given system so that it has at least one recoverable target loop transfer function or sensitivity function. Various differences that arise in LTR analysis of continuous and discrete systems are pointed out.

  19. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach

    PubMed Central

    Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-01-01

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625

  20. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach.

    PubMed

    Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K

    2017-06-21

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.
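
    A minimal sketch of the regression-and-validation step is given below: a small neural network maps backscatter, NDVI, thermal temperature, and incidence angle to soil moisture and is scored with leave-one-out cross-validation. The features and data are random placeholders, not the Crete gauge-network measurements, and scikit-learn's MLPRegressor stands in for whatever ANN implementation the authors used.

```python
# ANN regression of topsoil SMC from (synthetic) remote-sensing features,
# evaluated with leave-one-out cross-validation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([
    rng.normal(-10, 3, n),      # sigma0 backscatter [dB]
    rng.uniform(0.1, 0.8, n),   # NDVI
    rng.uniform(280, 310, n),   # thermal infrared temperature [K]
    rng.uniform(30, 45, n),     # incidence angle [deg]
])
y = 0.2 + 0.01 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.02, n)   # synthetic SMC

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out R^2 = {r2_score(y, pred):.2f}")
```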

  1. Cost-effectiveness analysis of trastuzumab in the adjuvant setting for treatment of HER2-positive breast cancer.

    PubMed

    Garrison, Louis P; Lubeck, Deborah; Lalla, Deepa; Paton, Virginia; Dueck, Amylou; Perez, Edith A

    2007-08-01

    Adding trastuzumab to adjuvant chemotherapy provides significant clinical benefit in patients with human epidermal growth factor receptor 2 (HER2)-positive breast cancer. A cost-effectiveness analysis was performed to assess clinical and economic implications of adding trastuzumab to adjuvant chemotherapy, based upon joint analysis of NSABP B-31 and NCCTG N9831 trials. A Markov model with 4 health states was used to estimate the cost utility for a 50-year-old woman on the basis of trial results through 4 years and estimates of long-term recurrence and death based on a meta-analysis of trials. From 6 years onward, rates of recurrence and death were assumed to be the same in both trastuzumab and chemotherapy-only arms. Incremental costs were estimated for diagnostic and treatment-related costs. Analyses were from payer and societal perspectives, and these analyses were projected to lifetime and 20-year horizons. Over a lifetime, the projected cost of trastuzumab per quality-adjusted life year (QALY; discount rate 3%) gained was $26,417 (range, $9,104-$69,340 under multiway sensitivity analysis). Discounted incremental lifetime cost was $44,923, and projected life expectancy was 3 years longer for patients who received trastuzumab (19.4 years vs 16.4 years). During a 20-year horizon, the projected cost of adding trastuzumab to chemotherapy was $34,201 per QALY gained. Key cost-effectiveness drivers were discount rate, trastuzumab price, and probability of metastasis. The cost-effectiveness result was robust to sensitivity analysis. Trastuzumab for adjuvant treatment of early stage breast cancer was projected to be cost effective over a lifetime horizon, achieving a cost-effectiveness ratio below that of many widely accepted oncology treatments. (c) 2007 American Cancer Society.
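
    The structure of such a model can be sketched as a small Markov cohort simulation: a state-occupancy vector is advanced through annual transition matrices, discounted costs and QALYs are accumulated, and the incremental cost per QALY is the ratio of the differences. All transition probabilities, costs, and utilities below are hypothetical placeholders, not the published model inputs.

```python
# Four-state Markov cohort model with hypothetical inputs (illustration only).
import numpy as np

# Annual transition matrices (rows: disease-free, recurrence, metastatic, dead; rows sum to 1)
P_new = np.array([[0.95, 0.03, 0.01, 0.01],
                  [0.00, 0.75, 0.18, 0.07],
                  [0.00, 0.00, 0.70, 0.30],
                  [0.00, 0.00, 0.00, 1.00]])
P_std = np.array([[0.91, 0.05, 0.02, 0.02],
                  [0.00, 0.75, 0.18, 0.07],
                  [0.00, 0.00, 0.70, 0.30],
                  [0.00, 0.00, 0.00, 1.00]])
state_cost = np.array([2000.0, 25000.0, 60000.0, 0.0])   # annual cost by health state
utility = np.array([0.90, 0.70, 0.50, 0.0])               # quality-of-life weight by state
discount = 0.03

def run(P, upfront_therapy_cost, years=20):
    dist = np.array([1.0, 0.0, 0.0, 0.0])     # cohort starts disease-free
    cost, qaly = upfront_therapy_cost, 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        cost += d * dist @ state_cost
        qaly += d * dist @ utility
        dist = dist @ P                        # advance the cohort one annual cycle
    return cost, qaly

c1, q1 = run(P_new, upfront_therapy_cost=40000.0)   # chemotherapy + new agent
c0, q0 = run(P_std, upfront_therapy_cost=0.0)       # chemotherapy alone
print(f"incremental cost per QALY gained ~ ${(c1 - c0) / (q1 - q0):,.0f}")
```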

  2. The role of driving factors in historical and projected carbon dynamics of upland ecosystems in Alaska.

    PubMed

    Genet, Hélène; He, Yujie; Lyu, Zhou; McGuire, A David; Zhuang, Qianlai; Clein, Joy; D'Amore, David; Bennett, Alec; Breen, Amy; Biles, Frances; Euskirchen, Eugénie S; Johnson, Kristofer; Kurkowski, Tom; Kushch Schroder, Svetlana; Pastick, Neal; Rupp, T Scott; Wylie, Bruce; Zhang, Yujin; Zhou, Xiaoping; Zhu, Zhiliang

    2018-01-01

    It is important to understand how upland ecosystems of Alaska, which are estimated to occupy 84% of the state (i.e., 1,237,774 km2), are influencing and will influence state-wide carbon (C) dynamics in the face of ongoing climate change. We coupled fire disturbance and biogeochemical models to assess the relative effects of changing atmospheric carbon dioxide (CO2), climate, logging and fire regimes on the historical and future C balance of upland ecosystems for the four main Landscape Conservation Cooperatives (LCCs) of Alaska. At the end of the historical period (1950-2009) of our analysis, we estimate that upland ecosystems of Alaska store ~50 Pg C (with ~90% of the C in soils), and gained 3.26 Tg C/yr. Three of the LCCs had gains in total ecosystem C storage, while the Northwest Boreal LCC lost C (-6.01 Tg C/yr) because of increases in fire activity. Carbon exports from logging affected only the North Pacific LCC and represented less than 1% of the state's net primary production (NPP). The analysis for the future time period (2010-2099) consisted of six simulations driven by climate outputs from two climate models for three emission scenarios. Across the climate scenarios, total ecosystem C storage increased between 19.5 and 66.3 Tg C/yr, which represents 3.4% to 11.7% increase in Alaska upland's storage. We conducted additional simulations to attribute these responses to environmental changes. This analysis showed that atmospheric CO2 fertilization was the main driver of ecosystem C balance. By comparing future simulations with constant and with increasing atmospheric CO2, we estimated that the sensitivity of NPP was 4.8% per 100 ppmv, but NPP becomes less sensitive to CO2 increase throughout the 21st century. Overall, our analyses suggest that the decreasing CO2 sensitivity of NPP and the increasing sensitivity of heterotrophic respiration to air temperature, in addition to the increase in C loss from wildfires weakens the C sink from upland ecosystems of Alaska and will ultimately lead to a source of CO2 to the atmosphere beyond 2100. Therefore, we conclude that the increasing regional C sink we estimate for the 21st century will most likely be transitional. © 2017 by the Ecological Society of America.

  3. The role of driving factors in historical and projected carbon dynamics of upland ecosystems in Alaska

    USGS Publications Warehouse

    Genet, Hélène; He, Yujie; Lyu, Zhou; McGuire, A. David; Zhuang, Qianlai; Clein, Joy S.; D'Amore, David; Bennett, Alec; Breen, Amy; Biles, Frances; Euskirchen, Eugénie S.; Johnson, Kristofer; Kurkowski, Tom; Schroder, Svetlana (Kushch); Pastick, Neal J.; Rupp, T. Scott; Wylie, Bruce K.; Zhang, Yujin; Zhou, Xiaoping; Zhu, Zhiliang

    2018-01-01

    It is important to understand how upland ecosystems of Alaska, which are estimated to occupy 84% of the state (i.e., 1,237,774 km2), are influencing and will influence state-wide carbon (C) dynamics in the face of ongoing climate change. We coupled fire disturbance and biogeochemical models to assess the relative effects of changing atmospheric carbon dioxide (CO2), climate, logging and fire regimes on the historical and future C balance of upland ecosystems for the four main Landscape Conservation Cooperatives (LCCs) of Alaska. At the end of the historical period (1950–2009) of our analysis, we estimate that upland ecosystems of Alaska store ~50 Pg C (with ~90% of the C in soils), and gained 3.26 Tg C/yr. Three of the LCCs had gains in total ecosystem C storage, while the Northwest Boreal LCC lost C (−6.01 Tg C/yr) because of increases in fire activity. Carbon exports from logging affected only the North Pacific LCC and represented less than 1% of the state's net primary production (NPP). The analysis for the future time period (2010–2099) consisted of six simulations driven by climate outputs from two climate models for three emission scenarios. Across the climate scenarios, total ecosystem C storage increased between 19.5 and 66.3 Tg C/yr, which represents 3.4% to 11.7% increase in Alaska upland's storage. We conducted additional simulations to attribute these responses to environmental changes. This analysis showed that atmospheric CO2 fertilization was the main driver of ecosystem C balance. By comparing future simulations with constant and with increasing atmospheric CO2, we estimated that the sensitivity of NPP was 4.8% per 100 ppmv, but NPP becomes less sensitive to CO2 increase throughout the 21st century. Overall, our analyses suggest that the decreasing CO2 sensitivity of NPP and the increasing sensitivity of heterotrophic respiration to air temperature, in addition to the increase in C loss from wildfires weakens the C sink from upland ecosystems of Alaska and will ultimately lead to a source of CO2 to the atmosphere beyond 2100. Therefore, we conclude that the increasing regional C sink we estimate for the 21st century will most likely be transitional.

  4. A preliminary benefit-cost study of a Sandia wind farm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlen, Mark Andrew; Griffin, Taylor; Loose, Verne W.

    In response to federal mandates and incentives for renewable energy, Sandia National Laboratories conducted a feasibility study of installing an on-site wind farm on Sandia National Laboratories and Kirtland Air Force Base property. This report describes a preliminary analysis of the costs and benefits of installing and operating a 15-turbine, 30-MW-capacity wind farm that delivers an estimated 16 percent of 2010 onsite demand. The report first describes market and non-market economic costs and benefits associated with operating a wind farm, and then uses a standard life-cycle costing and benefit-cost framework to estimate the costs and benefits of a wind farm. Based on these best estimates of costs and benefits and on factor, uncertainty, and sensitivity analyses, the results suggest that the benefits of a Sandia wind farm are greater than its costs. The analysis techniques used herein are applicable to the economic assessment of most, if not all, forms of renewable energy.

  5. Surrogate models for efficient stability analysis of brake systems

    NASA Astrophysics Data System (ADS)

    Nechak, Lyes; Gillot, Frédéric; Besset, Sébastien; Sinou, Jean-Jacques

    2015-07-01

    This study assesses the capacity of global sensitivity analysis, combined with the kriging formalism, to support the robust stability analysis of brake systems, which is too costly when performed with the classical complex eigenvalue analysis (CEA) based on finite element models (FEMs). By considering a simplified brake system, the global sensitivity analysis is first shown to be very helpful for understanding the effects of design parameters on the brake system's stability. This is enabled by the so-called Sobol indices, which discriminate design parameters with respect to their influence on the stability. Consequently, only the uncertainty of the influential parameters is taken into account in the following step, namely, the surrogate modelling based on kriging. The latter is then demonstrated to be an interesting alternative to FEMs since it allows, at a lower cost, an accurate estimation of the system's proportions of instability associated with the influential parameters.
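
    As an illustration of the surrogate idea described in this abstract, the following minimal Python sketch fits a kriging (Gaussian-process) surrogate with scikit-learn to a stand-in for the costly eigenvalue analysis and then estimates the proportion of unstable designs by cheap Monte Carlo on the surrogate. The function costly_stability_indicator, the two parameters and all numbers are illustrative assumptions, not values from the paper.

      # Kriging (Gaussian-process) surrogate replacing a costly complex-eigenvalue analysis,
      # followed by Monte Carlo estimation of the proportion of unstable designs.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def costly_stability_indicator(x):
          # Placeholder for the expensive FEM/CEA run: returns the real part of the
          # most unstable eigenvalue (positive value means squeal instability). Hypothetical.
          friction, stiffness = x
          return 0.8 * friction - 0.3 * stiffness + 0.05 * rng.standard_normal()

      # Small design of experiments over the two assumed influential parameters
      X_train = rng.uniform([0.3, 0.5], [0.7, 1.5], size=(40, 2))
      y_train = np.array([costly_stability_indicator(x) for x in X_train])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.1, 0.3]),
                                    normalize_y=True).fit(X_train, y_train)

      # Cheap Monte Carlo on the surrogate: fraction of parameter draws predicted unstable
      X_mc = rng.uniform([0.3, 0.5], [0.7, 1.5], size=(100_000, 2))
      p_unstable = np.mean(gp.predict(X_mc) > 0.0)
      print(f"estimated proportion of instability: {p_unstable:.3f}")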

  6. Cross-well slug testing in unconfined aquifers: A case study from the Sleepers River Watershed, Vermont

    USGS Publications Warehouse

    Belitz, K.; Dripps, W.

    1999-01-01

    Normally, slug test measurements are limited to the well in which the water level is perturbed. Consequently, it is often difficult to obtain reliable estimates of hydraulic properties, particularly if the aquifer is anisotropic or if there is a wellbore skin. In this investigation, we use partially penetrating stress and observation wells to evaluate specific storage, radial hydraulic conductivity and anisotropy of the aquifer, and the hydraulic conductivity of the borehole skin. The study site is located in the W9 subbasin of the Sleepers River Research Watershed, Vermont. At the site, ~3 m of saturated till are partially penetrated by a stress well located in the center of the unconfined aquifer and six observation wells located above, below, and at the depth of the stress well at radial distances of 1.2 and 2.4 m. The observation wells were shut in with inflatable packers. The semianalytical solution of Butler (1995) was used to conduct a sensitivity analysis and to interpret slug test results. The sensitivity analysis indicates that the response of the stress well is primarily sensitive to radial hydraulic conductivity, less sensitive to anisotropy and the conductivity of the borehole skin, and nearly insensitive to specific storage. In contrast, the responses of the observation wells are sensitive to all four parameters. Interpretation of the field data was facilitated by generating type curves in a manner analogous to the method of Cooper et al. (1967). Because the value of radial hydraulic conductivity is obtained from a match point, the number of unknowns is reduced to three. The estimated values of radial hydraulic conductivity and specific storage are comparable to those derived from the methods of Bouwer and Rice (1976) and Cooper et al. (1967). The values of anisotropy and skin conductivity, however, could not have been obtained without the use of observation wells.

  7. Space system operations and support cost analysis using Markov chains

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.

    1990-01-01

    This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
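
    A minimal sketch of how an absorbing Markov chain yields an expected operations-and-support cost and an expected life: the fundamental matrix N = (I - Q)^-1 gives the expected number of visits to each transient state before retirement, which is then weighted by a per-visit cost. The states, transition probabilities and costs below are hypothetical, not taken from the paper.

      # Operations-and-support cost estimation with an absorbing Markov chain.
      # States, probabilities and costs are hypothetical; retirement is the absorbing state.
      import numpy as np

      states = ["mission", "scheduled maintenance", "unscheduled repair"]   # transient states
      # Transition probabilities among the transient states per flight cycle; each row's
      # shortfall from 1.0 is the probability of absorption (vehicle retirement or loss).
      Q = np.array([[0.80, 0.12, 0.06],
                    [0.85, 0.10, 0.04],
                    [0.70, 0.15, 0.10]])
      cost_per_visit = np.array([2.0, 0.5, 3.5])        # M$ per visit to each state (assumed)

      N = np.linalg.inv(np.eye(3) - Q)                  # fundamental matrix: expected visits
      expected_visits = N[0]                            # starting from the "mission" state
      expected_life = expected_visits[0]                # expected number of missions flown
      expected_os_cost = expected_visits @ cost_per_visit

      for state, visits in zip(states, expected_visits):
          print(f"expected visits to {state}: {visits:.1f}")
      print(f"expected O&S cost: {expected_os_cost:.1f} M$; expected life: {expected_life:.1f} missions")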

  8. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using the parametric cost-estimate data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  9. Application of ADM1 for modeling of biogas production from anaerobic digestion of Hydrilla verticillata.

    PubMed

    Chen, Xiaojuan; Chen, Zhihua; Wang, Xun; Huo, Chan; Hu, Zhiquan; Xiao, Bo; Hu, Mian

    2016-07-01

    The present study focused on the application of anaerobic digestion model no. 1 (ADM1) to simulate biogas production from Hydrilla verticillata. Model simulation was carried out by implementing ADM1 in AQUASIM 2.0 software. Sensitivity analysis based on the absolute-relative sensitivity function was used to select the most sensitive parameters for estimation. Among all the kinetic parameters, the disintegration constant (kdis), the hydrolysis constant of protein (khyd_pr), the Monod maximum specific substrate uptake rates (km_aa, km_ac, km_h2) and the half-saturation constants (Ks_aa, Ks_ac) affect biogas production significantly; these were optimized by fitting the model equations to the data obtained from batch experiments. After parameter estimation, the ADM1 model was able to predict the experimental results of daily biogas production and biogas composition well. The simulation results for the evolution of organic acids, bacteria concentrations and inhibition effects also helped to provide insight into the reaction mechanisms. Copyright © 2016. Published by Elsevier Ltd.
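
    A minimal sketch of the parameter-screening step, assuming the absolute-relative sensitivity function is taken as p * dy/dp evaluated by central finite differences around the nominal parameter set; simulate_biogas is a toy stand-in for the ADM1/AQUASIM run, and the parameter values are hypothetical.

      # Rank ADM1-type parameters by an absolute-relative sensitivity measure,
      # S_j = p_j * d(output)/d(p_j), approximated with central finite differences.
      # simulate_biogas is a toy placeholder for the ADM1 model run.

      def simulate_biogas(params):
          # Toy stand-in returning a cumulative biogas volume for a parameter dict.
          return 100.0 * params["kdis"] ** 0.4 * params["km_ac"] ** 0.3 / params["Ks_ac"] ** 0.1

      nominal = {"kdis": 0.5, "khyd_pr": 10.0, "km_ac": 8.0, "Ks_ac": 0.15}

      def abs_rel_sensitivity(name, rel_step=0.01):
          up, down = dict(nominal), dict(nominal)
          up[name] *= 1.0 + rel_step
          down[name] *= 1.0 - rel_step
          dy_dp = (simulate_biogas(up) - simulate_biogas(down)) / (2.0 * rel_step * nominal[name])
          return nominal[name] * dy_dp              # output change for a 100% parameter change

      for name in sorted(nominal, key=lambda n: abs(abs_rel_sensitivity(n)), reverse=True):
          print(f"{name:>8s}: {abs_rel_sensitivity(name):+.2f}")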

  10. Sensitivity analysis to assess the influence of the inertial properties of railway vehicle bodies on the vehicle's dynamic behaviour

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Maroto, Joaquin; Rodriguez, Pablo

    2013-02-01

    A sensitivity analysis has been performed to assess the influence of the inertial properties of railway vehicles on their dynamic behaviour. To do this, 216 dynamic simulations were performed modifying, one at a time, the masses, moments of inertia and heights of the centre of gravity of the carbody, the bogie and the wheelset. Three values were assigned to each parameter, corresponding to the 10th, 50th and 90th percentiles of a data set stored in a database of railway vehicles. After processing the results of these simulations, the analysed parameters were sorted by increasing influence. The analysis also identified which of these parameters could be estimated with a lesser degree of accuracy in future simulations without appreciably affecting the simulation results. In general terms, it was concluded that the most sensitive inertial properties are the mass and the vertical moment of inertia, and the least sensitive ones the longitudinal and lateral moments of inertia.
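
    The following Python sketch illustrates the percentile-based, one-at-a-time screening idea, here ranking parameters by the output range they induce rather than reproducing the study's full set of 216 simulations; run_dynamic_simulation and all values are placeholders.

      # One-at-a-time screening over 10th/50th/90th percentile values of each inertial
      # property; run_dynamic_simulation is a placeholder for the multibody simulation.
      percentiles = {                                   # hypothetical P10 / P50 / P90 values
          "carbody_mass":        (28e3, 34e3, 42e3),
          "carbody_Izz":         (1.2e6, 1.6e6, 2.1e6),
          "bogie_mass":          (2.4e3, 3.0e3, 3.8e3),
          "wheelset_cog_height": (0.40, 0.46, 0.51),
      }

      def run_dynamic_simulation(params):
          # Toy stand-in returning, e.g., a ride-quality index for a parameter set.
          return (params["carbody_mass"] / 34e3) ** 0.8 + 0.1 * params["wheelset_cog_height"]

      baseline = {name: values[1] for name, values in percentiles.items()}   # all at P50

      output_range = {}
      for name, values in percentiles.items():
          outputs = [run_dynamic_simulation({**baseline, name: v}) for v in values]
          output_range[name] = max(outputs) - min(outputs)   # spread induced by this parameter

      for name in sorted(output_range, key=output_range.get, reverse=True):
          print(f"{name:>22s}: induced output range {output_range[name]:.4f}")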

  11. Relative azimuth inversion by way of damped maximum correlation estimates

    USGS Publications Warehouse

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
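
    A minimal sketch of the underlying idea, not the authors' Java implementation: rotate the test sensor's horizontal components by a trial angle, compute the correlation with a reference trace, and take the angle that maximizes it (a coarse grid to bracket the maximum, then a bounded scalar optimizer). The synthetic traces and noise levels are assumptions.

      # Estimate a relative sensor azimuth by maximizing the correlation between a
      # back-rotated horizontal component and a reference instrument's component.
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      n = 20_000
      s_north = rng.standard_normal(n)                  # synthetic ground motion, north
      s_east = rng.standard_normal(n)                   # synthetic ground motion, east

      ref_north = s_north + 0.05 * rng.standard_normal(n)   # well-oriented reference sensor

      theta = np.deg2rad(23.0)                          # unknown misorientation of the test sensor
      h1 = np.cos(theta) * s_north + np.sin(theta) * s_east + 0.05 * rng.standard_normal(n)
      h2 = -np.sin(theta) * s_north + np.cos(theta) * s_east + 0.05 * rng.standard_normal(n)

      def correlation(angle_deg):
          a = np.deg2rad(angle_deg)
          rotated = np.cos(a) * h1 - np.sin(a) * h2     # back-rotate the test components
          return np.corrcoef(rotated, ref_north)[0, 1]

      coarse = max(range(-180, 180), key=correlation)   # coarse grid to bracket the maximum
      fine = minimize_scalar(lambda a: -correlation(a),
                             bounds=(coarse - 2.0, coarse + 2.0), method="bounded")
      print(f"estimated relative azimuth: {fine.x:.2f} degrees")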

  12. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  13. Parametric sensitivity analysis of leachate transport simulations at landfills.

    PubMed

    Bou-Zeid, E; El-Fadel, M

    2004-01-01

    This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.

  14. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not result in the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.

  15. The economic burden of schizophrenia in Canada in 2004.

    PubMed

    Goeree, R; Farahati, F; Burke, N; Blackhouse, G; O'Reilly, D; Pyne, J; Tarride, J-E

    2005-12-01

    To estimate the financial burden of schizophrenia in Canada in 2004, a prevalence-based cost-of-illness (COI) approach was used. The primary sources of information for the study included a review of the published literature, a review of published reports and documents, secondary analysis of administrative datasets, and information collected directly from various federal and provincial government programs and services. The literature review included publications up to April 2005 reported in MedLine, EMBASE and PsychINFO. Where specific information from a province was not available, the method of mean substitution from other provinces was used. Costs incurred by various levels/departments of government were separated into healthcare and non-healthcare costs. Also included in the analysis was the value of lost productivity for premature mortality and morbidity associated with schizophrenia. Sensitivity analysis was used to test major cost assumptions used in the analysis. Where possible, all resource utilization estimates for the financial burden of schizophrenia were obtained for 2004 and are expressed in 2004 Canadian dollars (CAN dollars). The estimated number of persons with schizophrenia in Canada in 2004 was 234 305 (95% CI, 136 201-333 402). The direct healthcare and non-healthcare costs were estimated to be 2.02 billion CAN dollars in 2004. There were 374 deaths attributed to schizophrenia. This, combined with the high unemployment rate due to schizophrenia, resulted in an additional productivity morbidity and mortality loss estimate of 4.83 billion CAN dollars, for a total cost estimate in 2004 of 6.85 billion CAN dollars. By far the largest component of the total cost estimate was for productivity losses associated with morbidity in schizophrenia (70% of total costs), and the results showed that total cost estimates were most sensitive to alternative assumptions regarding the additional unemployment due to schizophrenia in Canada. Despite significant improvements in the past decade in pharmacotherapy, programs and services available for patients with schizophrenia, the economic burden of schizophrenia in Canada remains high. The most significant factor affecting the cost of schizophrenia in Canada is lost productivity due to morbidity. Programs targeted at improving patient symptoms and functioning to increase workforce participation have the potential to make a significant contribution in reducing the cost of this severe mental illness in Canada.

  16. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE PAGES

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-31

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.
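
    The paper's three-stage scheme is more involved than can be shown here, but the quantity it accelerates is the degree of rate control, the logarithmic derivative of the turnover frequency with respect to each rate constant. Below is a minimal Python sketch of the straightforward (expensive) finite-difference estimate that such methods improve upon, with run_kmc standing in for a stochastic lattice kMC run; all names and values are hypothetical.

      # Degree of rate control X_i = d ln(TOF) / d ln(k_i), estimated by plain central
      # finite differences on a noisy turnover-frequency estimate (run_kmc is a placeholder).
      import numpy as np

      rng = np.random.default_rng(2)

      def run_kmc(rate_constants):
          # Placeholder for a lattice kMC run returning an averaged turnover frequency.
          k = rate_constants
          tof = k["adsorption"] * k["reaction"] / (k["reaction"] + 5.0 * k["desorption"])
          return tof * (1.0 + 0.01 * rng.standard_normal(20)).mean()

      k0 = {"adsorption": 1.0e3, "reaction": 2.0e2, "desorption": 5.0e1}

      def degree_of_rate_control(name, rel_step=0.05):
          up, down = dict(k0), dict(k0)
          up[name] *= np.exp(rel_step)
          down[name] *= np.exp(-rel_step)
          return (np.log(run_kmc(up)) - np.log(run_kmc(down))) / (2.0 * rel_step)

      for name in k0:
          print(f"X_{name} = {degree_of_rate_control(name):+.3f}")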

  17. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO 2 (110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  18. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.

  19. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  20. Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model

    DOE PAGES

    Barnard, Richard C.; Frank, Martin; Krycki, Kai

    2016-02-09

    In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients. We focus on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using the spherical harmonics expansion (PN) and a Harten-Lax-van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results of the sensitivities for the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. These sensitivities in turn give estimates of the uncertainty in the normalized average deposition depth, which we present.
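
    For orientation, a generic statement of the adjoint sensitivity relation referred to above (not the authors' exact formulation; boundary terms are omitted): if the forward transport problem and the response are

      L(\sigma)\,\psi = q, \qquad R(\sigma) = \langle g, \psi \rangle,

    then one adjoint solve L^{*}(\sigma)\,\psi^{*} = g per response yields every coefficient sensitivity at once,

      \frac{\partial R}{\partial \sigma} = \Big\langle \psi^{*},\; \frac{\partial q}{\partial \sigma} - \frac{\partial L}{\partial \sigma}\,\psi \Big\rangle,

    which is the general mechanism by which adjoint-based sensitivities of a dose metric with respect to the stopping power and transport coefficients can be obtained and then checked against finite-difference approximations.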

  1. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
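
    As a small, self-contained illustration of the variance-based importance measures discussed above (not the MARS or FAST machinery used in the study), the following Python sketch computes Saltelli-style first-order Sobol indices for a toy model; the model function and input ranges are placeholders.

      # Saltelli-style estimate of first-order Sobol indices S_i for a toy model.
      import numpy as np

      rng = np.random.default_rng(3)
      n, d = 20_000, 3

      def model(x):
          # Toy nonlinear model standing in for the transport/risk model.
          return x[:, 0] + 2.0 * x[:, 1] ** 2 + np.sin(np.pi * x[:, 2]) * x[:, 0]

      A = rng.uniform(0.0, 1.0, size=(n, d))
      B = rng.uniform(0.0, 1.0, size=(n, d))
      yA, yB = model(A), model(B)
      var_y = np.var(np.concatenate([yA, yB]))

      for i in range(d):
          AB_i = A.copy()
          AB_i[:, i] = B[:, i]                      # replace column i with samples from B
          yAB = model(AB_i)
          S_i = np.mean(yB * (yAB - yA)) / var_y    # Saltelli (2010) first-order estimator
          print(f"S{i + 1} ~ {S_i:.3f}")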

  2. Statistical theory and methodology for remote sensing data analysis with special emphasis on LACIE

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1975-01-01

    Crop proportion estimators for determining crop acreage through the use of remote sensing were evaluated. Several studies of these estimators were conducted, including an empirical comparison of the different estimators (using actual data) and an empirical study of the sensitivity (robustness) of the class of mixture estimators. The effect of missing data upon crop classification procedures is discussed in detail including a simulation of the missing data effect. The final problem addressed is that of taking yield data (bushels per acre) gathered at several yield stations and extrapolating these values over some specified large region. Computer programs developed in support of some of these activities are described.

  3. Xpert MTB/RIF Assay for Pulmonary Tuberculosis and Rifampicin Resistance in Children: a Meta-Analysis.

    PubMed

    Wang, X W; Pappoe, F; Huang, Y; Cheng, X W; Xu, D F; Wang, H; Xu, Y H

    2015-01-01

    The Xpert MTB/RIF assay has been recommended by WHO to replace conventional microscopy, culture, and drug resistance tests. It simultaneously detects both Mycobacterium tuberculosis infection (TB) and resistance to rifampicin (RIF) within two hours. The objective was to review the available research studies on the accuracy of the Xpert MTB/RIF assay for diagnosing pulmonary TB and RIF-resistance in children. A comprehensive search of Pubmed and Embase was performed up to October 28, 2014. We identified published articles estimating the diagnostic accuracy of the Xpert MTB/RIF assay in children with or without HIV using culture or culture plus clinical TB as the standard reference. The QUADAS-2 tool was used to evaluate the quality of the studies. A summary estimation for sensitivity, specificity, diagnostic odds ratios (DOR), and the area under the summary ROC curve (AUC) was performed. Meta-analysis was used to establish the overall accuracy. Eleven diagnostic studies with 3801 patients were included in the systematic review. The overall analysis revealed a moderate sensitivity and high specificity of 65% (95% CI: 61 - 69%) and 99% (95% CI: 98 - 99%), respectively, and a pooled diagnostic odds ratio of 164.09 (95% CI: 111.89 - 240.64). The AUC value was found to be 0.94. The pooled sensitivity and specificity for paediatric rifampicin resistance were 94.0% (95% CI: 80.0 - 93.0%) and 99.0% (95% CI: 95.0 - 98.0%), respectively. Hence, the Xpert MTB/RIF assay has good diagnostic performance for paediatric pulmonary tuberculosis and for rifampicin resistance. The Xpert MTB/RIF is sensitive and specific for diagnosing paediatric pulmonary TB. It is also effective in detecting rifampicin resistance. It can, therefore, be used as an initial diagnostic tool.
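
    For readers unfamiliar with the summary measures quoted above, a short worked sketch shows how a single study's sensitivity, specificity and diagnostic odds ratio (with a log-scale 95% CI) follow from its 2x2 table against the reference standard. The counts are hypothetical; the pooled values in the abstract come from a meta-analytic model, not from this calculation.

      # Per-study diagnostic accuracy from a 2x2 table (hypothetical counts):
      #                 culture+   culture-
      #   Xpert+           tp         fp
      #   Xpert-           fn         tn
      import math

      tp, fp, fn, tn = 52, 3, 28, 300

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      dor = (tp * tn) / (fp * fn)                       # diagnostic odds ratio

      # 95% CI for the DOR on the log scale
      se_log_dor = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
      ci = (math.exp(math.log(dor) - 1.96 * se_log_dor),
            math.exp(math.log(dor) + 1.96 * se_log_dor))

      print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}")
      print(f"DOR {dor:.1f} (95% CI {ci[0]:.1f} to {ci[1]:.1f})")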

  4. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    PubMed

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  5. Funding a smoking cessation program for Crohn's disease: an economic evaluation.

    PubMed

    Coward, Stephanie; Heitman, Steven J; Clement, Fiona; Negron, Maria; Panaccione, Remo; Ghosh, Subrata; Barkema, Herman W; Seow, Cynthia; Leung, Yvette P Y; Kaplan, Gilaad G

    2015-03-01

    Patients with Crohn's disease (CD) who smoke are at a higher risk of flaring and requiring surgery. Cost-effectiveness studies of funding smoking cessation programs are lacking. Thus, we performed a cost-utility analysis of funding smoking cessation programs for CD. A cost-utility analysis was performed comparing five smoking cessation strategies: No Program, Counseling, Nicotine Replacement Therapy (NRT), NRT+Counseling, and Varenicline. The time horizon for the Markov model was 5 years. The health states included medical remission (azathioprine or antitumor necrosis factor (anti-TNF)), dose escalation of an anti-TNF, a second anti-TNF, surgery, and death. Probabilities were taken from peer-reviewed literature, and costs (CAN$) for surgery, medications, and smoking cessation programs were estimated locally. The primary outcome was the cost per quality-adjusted life year (QALY) gained associated with each smoking cessation strategy. Threshold analysis, three-way sensitivity analysis, probabilistic sensitivity analysis (PSA), and budget impact analysis (BIA) were carried out. All strategies dominated No Program. Strategies from most to least cost effective were as follows: Varenicline (cost: $55,614, QALY: 3.70), NRT+Counseling (cost: $58,878, QALY: 3.69), NRT (cost: $59,540, QALY: 3.69), Counseling (cost: $61,029, QALY: 3.68), and No Program (cost: $63,601, QALY: 3.67). Three-way sensitivity analysis demonstrated that No Program was only more cost effective when every strategy's cost exceeded approximately 10 times its estimated cost. The PSA showed that No Program was the most cost-effective strategy <1% of the time. The BIA showed that any strategy saved the health-care system money over No Program. Health-care systems should consider funding smoking cessation programs for CD, as they improve health outcomes and reduce costs.
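
    A small sketch reproducing the dominance comparison from the figures quoted above: each strategy's incremental cost and incremental QALYs relative to No Program (a negative incremental cost together with positive incremental QALYs means the strategy dominates). The numbers are the ones reported in the abstract (CAN$, 5-year Markov model).

      # Incremental cost and QALYs of each smoking-cessation strategy versus No Program.
      strategies = {
          "Varenicline":     (55_614, 3.70),
          "NRT+Counseling":  (58_878, 3.69),
          "NRT":             (59_540, 3.69),
          "Counseling":      (61_029, 3.68),
          "No Program":      (63_601, 3.67),
      }

      base_cost, base_qaly = strategies["No Program"]
      for name, (cost, qaly) in strategies.items():
          if name == "No Program":
              continue
          d_cost, d_qaly = cost - base_cost, qaly - base_qaly
          verdict = "dominates No Program" if d_cost < 0 and d_qaly > 0 else "compute ICER"
          print(f"{name:>15s}: incremental cost {d_cost:+,} CAN$, "
                f"incremental QALYs {d_qaly:+.2f} -> {verdict}")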

  6. Rapid Analysis of Nonstructural Carbohydrate Components in Grass Forage Using Microplate Enzymatic Assays

    USDA-ARS?s Scientific Manuscript database

    Measurements of nonstructural carbohydrates (NSC) in plant tissues are important to estimate plant organ resources available for plant growth and stress tolerance or for feed value to grazing animals. A popular commercially available assay kit used to detect glucose with a light sensitive dye reacti...

  7. Sensitivity Analysis of Dispersion Model Results in the NEXUS Health Study Due to Uncertainties in Traffic-Related Emissions Inputs

    EPA Science Inventory

    Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...

  8. Cost Sensitivity Analysis for Consolidated Interim Storage of Spent Fuel: Evaluating the Effect of Economic Environment Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cumberland, Riley M.; Williams, Kent Alan; Jarrell, Joshua J.

    This report evaluates how the economic environment (i.e., discount rate, inflation rate, escalation rate) can impact previously estimated differences in lifecycle costs between an integrated waste management system with an interim storage facility (ISF) and a similar system without an ISF.

  9. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model

    USDA-ARS?s Scientific Manuscript database

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...

  10. Characterizing the Sensitivity of Groundwater Storage to Climate variation in the Indus Basin

    NASA Astrophysics Data System (ADS)

    Huang, L.; Sabo, J. L.

    2017-12-01

    The Indus Basin represents an extensive groundwater aquifer facing the challenge of effective management of limited water resources. Groundwater storage is one of the most important variables of the water balance, yet its sensitivity to climate change has rarely been explored. To better estimate present and future groundwater storage and its sensitivity to climate change in the Indus Basin, we analyzed groundwater recharge/discharge and their historical evolution in this basin. Several methods are applied to specify the aquifer system, including water level change and storativity estimates, gravity estimates (GRACE), a flow model (MODFLOW), water budget analysis and extrapolation. In addition, all of the socioeconomic and engineering aspects are represented in the hydrological system through the change of temporal and spatial distributions of recharge and discharge (e.g., land use, crop structure, water allocation, etc.). Our results demonstrate that the direct impacts of climate change will result in unevenly distributed but increasing groundwater storage in the short term through groundwater recharge. In contrast, long term groundwater storage will decrease as a result of combined indirect and direct impacts of climate change (e.g. recharge/discharge and human activities). The sensitivity of groundwater storage to climate variation is characterized by topography, aquifer specifics and land use. Furthermore, by comparing possible outcomes of different human intervention scenarios, our study reveals that human activities play an important role in affecting the sensitivity of groundwater storage to climate variation. Overall, this study presents the feasibility and value of using integrated hydrological methods to support sustainable water resource management under climate change.

  11. Sources, distribution and export coefficient of phosphorus in lowland polders of Lake Taihu Basin, China.

    PubMed

    Huang, Jiacong; Gao, Junfeng; Jiang, Yong; Yin, Hongbin; Amiri, Bahman Jabbarian

    2017-12-01

    Identifying phosphorus (P) sources, distribution and export from lowland polders is important for P pollution management; however, it is challenging due to the high complexity of hydrological and P transport processes in lowland areas. In this study, the spatial pattern and temporal dynamics of the P export coefficient (PEC) from all the 2539 polders in Lake Taihu Basin, China were estimated using a coupled P model describing P dynamics in a polder system. The estimated amount of P export from polders in Lake Taihu Basin during 2013 was 1916.2 t/yr, with a spatially-averaged PEC of 1.8 kg/ha/yr. PEC had peak values (more than 4.0 kg/ha/yr) in the polders near/within the large cities, and was high during the rice-cropping season. Sensitivity analysis based on the coupled P model revealed that the sensitive factors controlling the PEC varied spatially and changed through time. Precipitation and air temperature were the most sensitive factors controlling PEC. Culvert control and fertilization were sensitive factors controlling PEC during some periods. This study demonstrated an estimation of PEC from 2539 polders in Lake Taihu Basin, and an identification of sensitive environmental factors affecting PEC. The investigation of polder P export at a watershed scale is helpful for water managers to learn the distribution of P sources, to identify key P sources, and thus to achieve best management practice in controlling P export from lowland areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.

  13. Estimating the Geocenter from GNSS Observations

    NASA Astrophysics Data System (ADS)

    Dach, Rolf; Michael, Meindl; Beutler, Gerhard; Schaer, Stefan; Lutz, Simon; Jäggi, Adrian

    2014-05-01

    The satellites of the Global Navigation Satellite Systems (GNSS) are orbiting the Earth according to the laws of celestial mechanics. As a consequence, the satellites are sensitive to the coordinates of the center of mass of the Earth. The coordinates of the (ground) tracking stations are referring to the center of figure as the conventional origin of the reference frame. The difference between the center of mass and center of figure is the instantaneous geocenter. Following this definition, the global GNSS solutions are sensitive to the geocenter. Several studies demonstrated strong correlations of the GNSS-derived geocenter coordinates with parameters intended to absorb radiation pressure effects acting on the GNSS satellites, and with GNSS satellite clock parameters. One should thus ask to what extent these satellite-related parameters absorb (or hide) the geocenter information. A clean simulation study has been performed to answer this question. In particular, the simulation environment allows the introduction of user-defined shifts of the geocenter (systematic inconsistencies between the satellite's and station's reference frames). These geocenter shifts may be recovered by the mentioned parameters - provided they were set up in the analysis. If the geocenter coordinates are not estimated, one may find out which other parameters absorb the user-defined shifts of the geocenter and to what extent. Furthermore, the simulation environment also allows the extraction of the correlation matrix from the a posteriori covariance matrix to study the correlations between different parameter types of the GNSS analysis system. Our results show high degrees of correlation between geocenter coordinates, orbit-related parameters, and satellite clock parameters. These correlations are of the same order of magnitude as the correlations between station heights, troposphere, and receiver clock parameters in each regional or global GNSS network analysis. If such correlations are accepted in a GNSS analysis when estimating station coordinates, geocenter coordinates must be considered as mathematically estimable in a global GNSS analysis. The geophysical interpretation may of course become difficult, e.g., if insufficient orbit models are used.

  14. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for the verification bias that exists in screening or diagnostic tests, the inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R software with which the method can be implemented. Sensitivity and specificity calculated with the traditional method and with the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95% CI: 74.23-89.93) and 85.86% (95% CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95% CI: 80.74-95.56) and 71.96% (95% CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95% CI: 63.11-92.62) and 85.80% (95% CI: 85.09-86.47), respectively, whereas they were 80.13% (95% CI: 66.81-93.46) and 85.80% (95% CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
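
    A minimal sketch of the inverse-probability weighting correction when verification depends only on the screening test result (each verified subject is weighted by the inverse of the verification probability for that test result); the counts and verification fractions below are hypothetical, not the cervical cancer screening data analysed in the article.

      # Inverse-probability weighting for verification bias: each verified subject is
      # weighted by 1 / P(verified | test result). Counts below are hypothetical.
      n_pos, n_neg = 400, 3600                  # screen-positive / screen-negative subjects
      ver_pos, ver_neg = 380, 360               # number of each group verified by the gold standard
      tp, fp = 120, 260                         # verified screen-positives: diseased / not diseased
      fn, tn = 6, 354                           # verified screen-negatives: diseased / not diseased

      w_pos = n_pos / ver_pos                   # inverse verification probability, test positive
      w_neg = n_neg / ver_neg                   # inverse verification probability, test negative

      sens_ipw = (w_pos * tp) / (w_pos * tp + w_neg * fn)
      spec_ipw = (w_neg * tn) / (w_neg * tn + w_pos * fp)
      sens_naive = tp / (tp + fn)               # ignores the unverified subjects
      spec_naive = tn / (tn + fp)

      print(f"naive sensitivity {sens_naive:.1%}, IPW-adjusted {sens_ipw:.1%}")
      print(f"naive specificity {spec_naive:.1%}, IPW-adjusted {spec_ipw:.1%}")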

  15. Improving FIA trend analysis through model-based estimation using landsat disturbance maps and the forest vegetation simulator

    Treesearch

    Sean P. Healey; Gretchen G. Moisen; Paul L. Patterson

    2012-01-01

    The Forest Inventory and Analysis (FIA) Program's panel system, in which 10-20 percent of the sample is measured in any given year, is designed to increase the currency of FIA reporting and its sensitivity to factors operating at relatively fine temporal scales. Now that much of the country has completed at least one measurement cycle over all panels, there is an...

  16. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest neighbor pair of base-pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.

  17. Network modelling methods for FMRI.

    PubMed

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
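
    As a small illustration of one of the better-performing families of methods mentioned above, the following numpy-only sketch computes a partial-correlation network from node timeseries via the (pseudo-)inverse of the covariance matrix; in practice a regularized precision estimate (e.g., graphical lasso) is preferable for realistic numbers of nodes, as in the study. The simulated timeseries are placeholders.

      # Partial-correlation network from node timeseries: invert the covariance and
      # normalize the precision matrix, rho_ij = -P_ij / sqrt(P_ii * P_jj).
      import numpy as np

      rng = np.random.default_rng(4)
      n_timepoints, n_nodes = 200, 5
      ts = rng.standard_normal((n_timepoints, n_nodes))
      ts[:, 1] += 0.8 * ts[:, 0]                 # inject a direct connection 0 -> 1
      ts[:, 2] += 0.8 * ts[:, 1]                 # and 1 -> 2 (nodes 0 and 2 linked only indirectly)

      cov = np.cov(ts, rowvar=False)
      precision = np.linalg.pinv(cov)            # use a regularized estimator for many nodes

      d = np.sqrt(np.diag(precision))
      partial_corr = -precision / np.outer(d, d)
      np.fill_diagonal(partial_corr, 1.0)

      print(np.round(partial_corr, 2))           # the 0-2 entry should be near zero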

  18. Cost of improving Access to Psychological Therapies (IAPT) programme: an analysis of cost of session, treatment and recovery in selected Primary Care Trusts in the East of England region.

    PubMed

    Radhakrishnan, Muralikrishnan; Hammond, Geoffrey; Jones, Peter B; Watson, Alison; McMillan-Shields, Fiona; Lafortune, Louise

    2013-01-01

    Recent literature on Improving Access to Psychological Therapies (IAPT) has reported on improvements in clinical outcomes, changes in employment status and the concept of recovery attributable to IAPT treatment, but not on the costs of the programme. This article reports the costs associated with a single session, a completed course of treatment and recovery for four treatment courses (i.e., remaining in low or high intensity treatment, stepping up or down) in IAPT services in 5 East of England region Primary Care Trusts. Costs were estimated using treatment activity data and gross financial information, along with assumptions about how these financial data could be broken down. The estimated average cost of a high intensity session was £177 and the average cost of a low intensity session was £99. The average cost of treatment was £493 (low intensity), £1416 (high intensity), £699 (stepped down), £1514 (stepped up) and £877 (All). The cost per recovered patient was £1043 (low intensity), £2895 (high intensity), £1653 (stepped down), £2914 (stepped up) and £1766 (All). Sensitivity analysis revealed that the costs are sensitive to cost ratio assumptions, indicating that inaccurate ratios are likely to influence overall estimates. Results indicate that the cost per session exceeds previously reported estimates, but the cost of treatment is only marginally higher. The current cost estimates are supportive of the originally proposed IAPT model on cost-benefit grounds. The study also provides a framework to estimate costs using financial data, especially when programmes have block contract arrangements. Replication and additional analyses, along with evidence-based discussion regarding alternative, cost-effective methods of intervention, are recommended. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a key safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering exercise. The large number of parameters that can influence dust source term production is reduced with statistical tools, using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  20. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
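
    The orthogonal-based local identifiability idea can be illustrated separately from the UKF itself: given a sensitivity matrix of model outputs with respect to parameters, parameters are ranked by how much of their sensitivity column lies outside the span of the columns already selected. The sketch below is a generic NumPy version of that ranking, not the authors' implementation, and the tolerance is an assumed cut-off.

      # Generic orthogonal identifiability ranking: S has one row per observation
      # and one column per parameter (local sensitivities at nominal values).
      import numpy as np

      def orthogonal_identifiability(S, tol=1e-3):
          n_params = S.shape[1]
          selected = []
          basis = np.zeros((S.shape[0], 0))
          for _ in range(n_params):
              # Residual of each column after projection onto the selected subspace
              if basis.shape[1]:
                  proj = basis @ np.linalg.lstsq(basis, S, rcond=None)[0]
                  resid = S - proj
              else:
                  resid = S.copy()
              norms = np.linalg.norm(resid, axis=0)
              if selected:
                  norms[selected] = -np.inf          # never re-select a parameter
              j = int(np.argmax(norms))
              if norms[j] < tol:                     # remainder treated as unidentifiable
                  break
              selected.append(j)
              basis = np.column_stack([basis, S[:, j]])
          return selected                            # identifiable parameters, ranked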

  1. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.

  2. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    PubMed

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias, requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
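
    The QBA correction referred to here is, in its simplest form, an algebraic back-calculation from observed prevalence, sensitivity and specificity. The sketch below applies the standard formula to illustrative numbers (not the study's data) to show how it can return an impossible negative prevalence, the kind of invalid result described in the abstract.

      # Standard misclassification correction; illustrative inputs only.
      def qba_corrected_prevalence(observed_prev, sensitivity, specificity):
          return (observed_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

      print(qba_corrected_prevalence(0.10, 0.713, 0.962))   # plausible corrected estimate
      print(qba_corrected_prevalence(0.02, 0.713, 0.962))   # negative, i.e. an invalid result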

  3. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that increasing either the external or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, providing a two-stage method to estimate the impact of the relevant parameters when their values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and a novel way to estimate the sampling level. PMID:25821847

  4. Tutorial in Biostatistics: Instrumental Variable Methods for Causal Inference*

    PubMed Central

    Baiocchi, Michael; Cheng, Jing; Small, Dylan S.

    2014-01-01

    A goal of many health studies is to determine the causal effect of a treatment or intervention on health outcomes. Often, it is not ethically or practically possible to conduct a perfectly randomized experiment and instead an observational study must be used. A major challenge to the validity of observational studies is the possibility of unmeasured confounding (i.e., unmeasured ways in which the treatment and control groups differ before treatment administration which also affect the outcome). Instrumental variables analysis is a method for controlling for unmeasured confounding. This type of analysis requires the measurement of a valid instrumental variable, which is a variable that (i) is independent of the unmeasured confounding; (ii) affects the treatment; and (iii) affects the outcome only indirectly through its effect on the treatment. This tutorial discusses the types of causal effects that can be estimated by instrumental variables analysis; the assumptions needed for instrumental variables analysis to provide valid estimates of causal effects and sensitivity analysis for those assumptions; methods of estimation of causal effects using instrumental variables; and sources of instrumental variables in health studies. PMID:24599889
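
    A minimal illustration of the estimation step discussed in the tutorial is two-stage least squares: regress the treatment on the instrument, then regress the outcome on the fitted treatment. The simulated data and variable names below are hypothetical, constructed so that naive regression is confounded while the instrument satisfies the three stated conditions.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      u = rng.standard_normal(n)                              # unmeasured confounder
      z = rng.binomial(1, 0.5, n).astype(float)                # instrument
      d = (0.8 * z + 0.5 * u + rng.standard_normal(n) > 0.5).astype(float)   # treatment
      y = 2.0 * d + 1.5 * u + rng.standard_normal(n)           # outcome; true effect = 2

      # Stage 1: treatment on instrument; Stage 2: outcome on fitted treatment
      Z = np.column_stack([np.ones(n), z])
      d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
      X2 = np.column_stack([np.ones(n), d_hat])
      beta = np.linalg.lstsq(X2, y, rcond=None)[0]
      print("naive OLS slope:", np.polyfit(d, y, 1)[0])        # biased by confounding
      print("2SLS estimate:", beta[1])                         # close to the true effect of 2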

  5. Diagnostic Accuracy and Cost-Effectiveness of Alternative Methods for Detection of Soil-Transmitted Helminths in a Post-Treatment Setting in Western Kenya

    PubMed Central

    Kepha, Stella; Kihara, Jimmy H.; Njenga, Sammy M.; Pullan, Rachel L.; Brooker, Simon J.

    2014-01-01

    Objectives This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Methods Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Results Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. Conclusions The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings. PMID:24810593

  6. Diagnostic accuracy and cost-effectiveness of alternative methods for detection of soil-transmitted helminths in a post-treatment setting in western Kenya.

    PubMed

    Assefa, Liya M; Crellen, Thomas; Kepha, Stella; Kihara, Jimmy H; Njenga, Sammy M; Pullan, Rachel L; Brooker, Simon J

    2014-05-01

    This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings.

  7. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, which considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.

  8. Issues in the economic evaluation of influenza vaccination by injection of healthy working adults in the US: a review and decision analysis of ten published studies.

    PubMed

    Hogan, Thomas J

    2012-05-01

    The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given the stated misgivings, the methods of estimating the reduction in influenza ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm that they are accurate, reliable and appropriate for economic evaluation purposes.

  9. Sensitivity analysis for mistakenly adjusting for mediators in estimating total effect in observational studies.

    PubMed

    Wang, Tingting; Li, Hongkai; Su, Ping; Yu, Yuanyuan; Sun, Xiaoru; Liu, Yi; Yuan, Zhongshang; Xue, Fuzhong

    2017-11-20

    In observational studies, epidemiologists often attempt to estimate the total effect of an exposure on an outcome of interest. However, when the underlying causal diagram is unknown and only limited knowledge is available, it is essential to understand how bias behaves when mediators are mistakenly adjusted for under logistic regression. Through simulation, we focused on six causal diagrams concerning different roles of mediators. Sensitivity analysis was conducted to assess how the bias behaves as the exposure-mediator and mediator-outcome effects are varied while adjusting for the mediator. Based on causal relationships plausible in the real world, we compared the biases obtained by varying the exposure-mediator effects with those obtained by varying the mediator-outcome effects when adjusting for the mediator. The magnitude of the bias was defined as the difference between the estimated effect (using logistic regression) and the total effect of the exposure on the outcome. In four scenarios (a single mediator, two mediators in series, two independent parallel mediators or two correlated parallel mediators), the biases from varying the exposure-mediator effects were greater than those from varying the mediator-outcome effects when adjusting for the mediator. In contrast, in two other scenarios (a single mediator or two independent parallel mediators in the presence of unobserved confounders), the biases from varying the exposure-mediator effects were smaller than those from varying the mediator-outcome effects when adjusting for the mediator. In other words, when adjusting for the mediator, the bias was more sensitive to variation in the exposure-mediator effects than in the mediator-outcome effects in the absence of unobserved confounders, while it was more sensitive to variation in the mediator-outcome effects in the presence of an unobserved confounder. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Performance of a high-sensitivity dedicated cardiac SPECT scanner for striatal uptake quantification in the brain based on analysis of projection data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Mi-Ae; Moore, Stephen C.; McQuaid, Sarah J.

    Purpose: The authors have previously reported the advantages of high-sensitivity single-photon emission computed tomography (SPECT) systems for imaging structures located deep inside the brain. DaTscan (Ioflupane I-123) is a dopamine transporter (DaT) imaging agent that has shown potential for early detection of Parkinson disease (PD), as well as for monitoring progression of the disease. Realizing the full potential of DaTscan requires efficient estimation of striatal uptake from SPECT images. They have evaluated two SPECT systems, a conventional dual-head gamma camera with low-energy high-resolution collimators (conventional) and a dedicated high-sensitivity multidetector cardiac imaging system (dedicated), for imaging tasks related to PD. Methods: Cramer-Rao bounds (CRB) on the precision of estimates of striatal and background activity concentrations were calculated from high-count, separate acquisitions of the compartments (right striata, left striata, background) of a striatal phantom. CRB on striatal and background activity concentration were calculated from essentially noise-free projection datasets, synthesized by scaling and summing the compartment projection datasets, for a range of total detected counts. They also calculated variances of estimates of specific-to-nonspecific binding ratios (BR) and asymmetry indices from these values using propagation-of-error analysis, as well as the precision of measuring changes in BR on the order of the average annual decline in early PD. Results: Under typical clinical conditions, the conventional camera detected 2 M counts while the dedicated camera detected 12 M counts. Assuming a normal BR of 5, the standard deviation of BR estimates was 0.042 and 0.021 for the conventional and dedicated system, respectively. For an 8% decrease to BR = 4.6, the signal-to-noise ratios were 6.8 (conventional) and 13.3 (dedicated); for a 5% decrease, they were 4.2 (conventional) and 8.3 (dedicated). Conclusions: This implies that PD can be detected earlier with the dedicated system than with the conventional system; therefore, earlier identification of PD progression should be possible with the high-sensitivity dedicated SPECT camera.

  11. The estimated sensitivity and specificity of compartment pressure monitoring for acute compartment syndrome.

    PubMed

    McQueen, Margaret M; Duckworth, Andrew D; Aitken, Stuart A; Court-Brown, Charles M

    2013-04-17

    The aim of our study was to document the estimated sensitivity and specificity of continuous intracompartmental pressure monitoring for the diagnosis of acute compartment syndrome. From our prospective trauma database, we identified all patients who had sustained a tibial diaphyseal fracture over a ten-year period. A retrospective analysis of 1184 patients was performed to record and analyze the documented use of continuous intracompartmental pressure monitoring and the use of fasciotomy. A diagnosis of acute compartment syndrome was made if there was escape of muscles at fasciotomy and/or color change in the muscles or muscle necrosis intraoperatively. A diagnosis of acute compartment syndrome was considered incorrect if it was possible to close the fasciotomy wounds primarily at forty-eight hours. The absence of acute compartment syndrome was confirmed by the absence of neurological abnormality or contracture at the time of the latest follow-up. Of 979 monitored patients identified, 850 fit the inclusion criteria with a mean age of thirty-eight years (range, twelve to ninety-four years), and 598 (70.4%) were male (p < 0.001). A total of 152 patients (17.9%) underwent fasciotomy for the treatment of acute compartment syndrome: 141 had acute compartment syndrome (true positives), six did not have it (false positives), and five underwent fasciotomy despite having a normal differential pressure reading, with subsequent operative findings consistent with acute compartment syndrome (false negatives). Of the 698 patients (82.1%) who did not undergo fasciotomy, 689 had no evidence of any late sequelae of acute compartment syndrome (true negatives) at a mean follow-up time of fifty-nine weeks. The estimated sensitivity of intracompartmental pressure monitoring for suspected acute compartment syndrome was 94%, with an estimated specificity of 98%, an estimated positive predictive value of 93%, and an estimated negative predictive value of 99%. The estimated sensitivity and specificity of continuous intracompartmental pressure monitoring for the diagnosis of acute compartment syndrome following tibial diaphyseal fracture are high; continuous intracompartmental pressure monitoring should be considered for patients at risk for acute compartment syndrome.
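
    For readers who want to reproduce the arithmetic, the sketch below applies the standard 2x2 definitions to the counts reported in the abstract; the raw proportions differ slightly from the article's quoted "estimated" values, which reflect the authors' additional estimation assumptions.

      # 141 true positives, 6 false positives, 5 false negatives, 689 true negatives
      tp, fp, fn, tn = 141, 6, 5, 689
      sensitivity = tp / (tp + fn)   # ~0.97 from the raw counts
      specificity = tn / (tn + fp)   # ~0.99
      ppv = tp / (tp + fp)           # ~0.96
      npv = tn / (tn + fn)           # ~0.99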

  12. A modeling study examining the impact of nutrient boundaries ...

    EPA Pesticide Factsheets

    A mass balance eutrophication model, Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to describe nitrogen, phosphorus and primary production in the Louisiana shelf of the Gulf of Mexico. Features of this model include bi-directional boundary exchanges, an empirical site-specific light attenuation equation, estimates of 56 river loads and atmospheric loads. The model was calibrated for 2006 by comparing model output to observations in zones that represent different locations in the Gulf. The model exhibited reasonable skill in simulating the phosphorus and nitrogen field data and primary production observations. The model was applied to generate a nitrogen mass balance estimate, to perform sensitivity analysis to compare the importance of the nutrient boundary concentrations versus the river loads on nutrient concentrations and primary production within the shelf, and to provide insight into the relative importance of different limitation factors on primary production. The mass budget showed the importance of the rivers as the major external nitrogen source while the atmospheric load contributed approximately 2% of the total external load. Sensitivity analysis showed the importance of accurate estimates of boundary nitrogen concentrations on the nitrogen levels on the shelf, especially at regions further away from the river influences. The boundary nitrogen concentrations impacted primary production less than nitrogen concentrations.

  13. Groundwater recharge in irrigated semi-arid areas: quantitative hydrological modelling and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Jiménez-Martínez, Joaquín; Candela, Lucila; Molinero, Jorge; Tamoh, Karim

    2010-12-01

    For semi-arid regions, methods of assessing aquifer recharge usually consider the potential evapotranspiration. Actual evapotranspiration rates can be below potential rates for long periods of time, even in irrigated systems. Accurate estimations of aquifer recharge in semi-arid areas under irrigated agriculture are essential for sustainable water-resources management. A method to estimate aquifer recharge from irrigated farmland has been tested. The water-balance-modelling approach was based on VisualBALAN v. 2.0, a computer code that simulates water balance in the soil, vadose zone and aquifer. The study was carried out in the Campo de Cartagena (SE Spain) in the period 1999-2008 for three different groups of crops: annual row crops (lettuce and melon), perennial vegetables (artichoke) and fruit trees (citrus). Computed mean-annual-recharge values (from irrigation+precipitation) during the study period were 397 mm for annual row crops, 201 mm for perennial vegetables and 194 mm for fruit trees: 31.4, 20.7 and 20.5% of the total applied water, respectively. The effects of rainfall events on the final recharge were clearly observed, due to the continuously high water content in soil which facilitated the infiltration process. A sensitivity analysis to assess the reliability and uncertainty of recharge estimations was carried out.
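
    A toy version of the soil-water-balance computation underlying codes such as VisualBALAN is sketched below: water entering the root zone from precipitation plus irrigation is reduced by actual evapotranspiration, and any storage above an assumed field capacity drains as recharge. All forcing series and parameter values are invented for illustration and are not the Campo de Cartagena inputs.

      import numpy as np

      rng = np.random.default_rng(4)
      days = 365
      precip_plus_irrig = rng.gamma(0.5, 10.0, days)                 # mm/day, placeholder
      actual_et = np.clip(rng.normal(2.5, 1.0, days), 0, None)       # mm/day, placeholder

      field_capacity = 120.0     # mm, assumed root-zone storage capacity
      soil = 60.0                # mm, assumed initial storage
      recharge = np.zeros(days)
      for i in range(days):
          soil += precip_plus_irrig[i] - actual_et[i]
          soil = max(soil, 0.0)
          if soil > field_capacity:            # excess drains below the root zone
              recharge[i] = soil - field_capacity
              soil = field_capacity

      print("annual recharge (mm):", recharge.sum())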

  14. Offshore Wind Plant Balance-of-Station Cost Drivers and Sensitivities (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saur, G.; Maples, B.; Meadows, B.

    2012-09-01

    With Balance of System (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs associated with offshore turbine sizes ranging from 3 MW to 6 MW and offshore wind plant sizes ranging from 100 MW to 1000 MW has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from US offshore wind plants.

  15. Budget impact analysis of pemetrexed introduction: case study from a teaching hospital perspective, Thailand.

    PubMed

    Chanjaruporn, Farsai; Roughead, Elizabeth E; Sooksriwong, Cha-oncin; Kaojarern, Sming

    2011-09-01

    Thailand does not currently require Budget Impact Analysis (BIA) assessment. The present study aimed to estimate the annual drug cost and the incremental impact on the hospital pharmaceutical budget of the introduction of pemetrexed to a Thai teaching hospital. The budget impact model was conducted in accordance with the Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee (PBAC). The model variables consisted of number of patients, growth rate of lung cancer, uptake rate of pemetrexed over time, unit prices of drugs, and the length and cost of treatment. Sensitivity analysis was performed to determine changes in budgetary impact due to variation of parameters or assumptions in the model. The introduction of pemetrexed was estimated to impose considerable costs on the teaching hospital. In the base-case analysis, the incremental costs were estimated at 8,553,984 Baht in the first year, increasing to 12,118,144 Baht, 17,820,800 Baht and 17,820,800 Baht in the following years. The 4-year net budgetary impact was 20,154,480 Baht, or approximately 127,560 Baht per patient. Sensitivity analyses found that the number of treatment cycles and the proportion of patients assumed to be treated with pemetrexed were the two most important influencing factors in the model. New costly innovative interventions should be evaluated using the BIA model to determine whether they are affordable. The Thai government should consider requiring a BIA study as one of the requirements for drug submission to assist in listing and subsidy decisions for medicines.

  16. Speed Profiles for Improvement of Maritime Emission Estimation.

    PubMed

    Yau, Pui Shan; Lee, Shun-Cheng; Ho, Kin Fai

    2012-12-01

    Maritime emissions play an important role in anthropogenic emissions, particularly for cities with busy ports such as Hong Kong. Ship emissions are strongly dependent on vessel speed, and thus accurate vessel speed is essential for maritime emission studies. In this study, we determined minute-by-minute high-resolution speed profiles of container ships on four major routes in Hong Kong waters using the Automatic Identification System (AIS). The activity-based ship emissions of NOx, CO, HC, CO2, SO2, and PM10 were estimated using the derived vessel speed profiles, and results were compared with those using the speed limits of control zones. Estimation using speed limits resulted in up to twofold overestimation of ship emissions. Compared with emissions estimated using the speed limits of control zones, emissions estimated using vessel speed profiles could provide results with up to 88% higher accuracy. Uncertainty analysis and sensitivity analysis of the model demonstrated the significance of improving vessel speed resolution. Spatial analysis revealed that SO2 and PM10 emissions during maneuvering within 1 nautical mile of the port were the highest, contributing 7%-22% of SO2 emissions and 8%-17% of PM10 emissions of the entire voyage in Hong Kong.
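
    The activity-based calculation described here can be sketched as follows: each per-minute speed observation is converted to an engine load (a cubic propeller-law approximation is assumed, since the abstract does not state the exact formulation) and multiplied by installed power, time and an emission factor. All numbers below are placeholders, not the study's AIS data or emission factors.

      import numpy as np

      speed_kn = np.array([18.0, 15.0, 10.0, 6.0, 3.0])   # per-minute speeds (assumed)
      design_speed_kn = 22.0                               # assumed design speed
      installed_power_kw = 30000.0                         # assumed main-engine power
      ef_nox_g_per_kwh = 14.0                              # assumed NOx emission factor

      load = np.clip((speed_kn / design_speed_kn) ** 3, 0.02, 1.0)   # propeller-law load
      energy_kwh = installed_power_kw * load * (1.0 / 60.0)          # one-minute intervals
      nox_g = (energy_kwh * ef_nox_g_per_kwh).sum()
      print("NOx over these minutes (g):", nox_g)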

  17. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    NASA Astrophysics Data System (ADS)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is sensitive only to shear wave speed, not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computational resources and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  18. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  19. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can easily be adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
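
    A generic sketch of the variance-based (Sobol) part of such an analysis is shown below, assuming the SALib package is available; the parameter names, bounds and test function are placeholders standing in for the LaSAR transport model, not the ranges actually used in the study.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["macro_dispersion", "mass_transfer_k0", "velocity"],   # placeholder names
          "bounds": [[0.1, 1.0], [1e-4, 1e-1], [0.5, 2.0]],                # placeholder ranges
      }

      X = saltelli.sample(problem, 1024)                   # Saltelli sampling design
      Y = np.log(X[:, 0]) + 5.0 / X[:, 1] + X[:, 2] ** 2   # placeholder model output
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])                            # first-order and total indices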

  20. [Test and programme sensitivities of screening for colorectal cancer in Reggio Emilia].

    PubMed

    Campari, Cinzia; Sassatelli, Romano; Paterlini, Luisa; Camellini, Lorenzo; Menozzi, Patrizia; Cattani, Antonella

    2011-01-01

    To estimate the sensitivity of the immunochemical test for faecal occult blood (FOBT) and the sensitivity of the colorectal tumour screening programme in the province of Reggio Emilia. Retrospective cohort study including a sample of 80,357 people of both genders, aged 50-69, who underwent FOBT during the first round of the screening programme in the province of Reggio Emilia, from April 2005 to December 2007. The outcome measure was the incidence of interval cancers. The proportional incidence method was used to estimate the sensitivity of FOBT and of the screening programme. Data were stratified according to gender, age and year of the interval. The overall sensitivity of FOBT was 73.2% (95% CI 63.8-80.7). The sensitivity of FOBT was lower in females (70.5% vs 75.1%), higher in the 50-59 age group (78.6% vs 70.2%) and higher in the colon than in the rectum (75.1% vs 68.9%). The test had significantly higher sensitivity in the 1st year of the interval than in the 2nd (84.4% vs 60.5%; RR=0.39, 95% CI 0.22-0.70), a difference that was confirmed when data were stratified according to gender. The overall sensitivity of the programme was 70.9% (95% CI 61.5-78.5). No statistically significant differences were found when data were stratified according to gender, age or site. Again, the sensitivity in the 1st year was significantly higher than in the 2nd year of the interval (83.2% vs 57.0%; RR=0.41, 95% CI 0.24-0.69). Overall, our data confirm the findings of similar Italian studies, although subgroup analysis showed some differences in sensitivity in our study.

  1. Analysis of the NAEG model of transuranic radionuclide transport and dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercher, J.R.; Anspaugh, L.R.

    We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to the lung, upper respiratory tract, and thoracic lymph nodes; the GI tract receives 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%, accidental soil ingestion 5%, ingestion of beef liver 4%, and beef muscle 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.

  2. A practical guide to propensity score analysis for applied clinical research.

    PubMed

    Lee, Jaehoon; Little, Todd D

    2017-11-01

    Observational studies are often the only viable option in many clinical settings, especially when it is unethical or infeasible to randomly assign participants to different treatment régimes. In such cases, propensity score (PS) analysis can be applied to account for possible selection bias and thereby address questions of causal inference. Many PS methods exist, yet few guidelines are available to aid applied researchers in the conduct and evaluation of a PS analysis. In this article we give an overview of available techniques for PS estimation and application, balance diagnostics, treatment effect estimation, and sensitivity assessment, as well as recent advances. We also offer a tutorial that can be used to emulate the steps of a PS analysis. Our goal is to provide information that will bring PS analysis within the reach of applied clinical researchers and practitioners. Copyright © 2017 Elsevier Ltd. All rights reserved.
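
    One workflow covered by such tutorials, propensity score estimation by logistic regression followed by inverse-probability-of-treatment weighting, is sketched below on simulated data; the variable names, data-generating model and scikit-learn usage are illustrative assumptions rather than material from the article.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 2000
      x = rng.standard_normal((n, 3))                             # observed confounders
      p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
      t = rng.binomial(1, p_treat)
      y = 1.0 * t + x @ np.array([0.5, -0.3, 0.2]) + rng.standard_normal(n)   # true effect = 1

      ps = LogisticRegression(max_iter=1000).fit(x, t).predict_proba(x)[:, 1]  # propensity scores
      w = np.where(t == 1, 1 / ps, 1 / (1 - ps))                               # IPTW weights
      ate = (np.average(y[t == 1], weights=w[t == 1])
             - np.average(y[t == 0], weights=w[t == 0]))
      print("weighted ATE estimate:", ate)                                     # close to 1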

  3. Estimating the Triple-Point Isotope Effect and the Corresponding Uncertainties for Cryogenic Fixed Points

    NASA Astrophysics Data System (ADS)

    Tew, W. L.

    2008-02-01

    The sensitivities of melting temperatures to isotopic variations in monatomic and diatomic atmospheric gases using both theoretical and semi-empirical methods are estimated. The current state of knowledge of the vapor-pressure isotope effects (VPIE) and triple-point isotope effects (TPIE) is briefly summarized for the noble gases (except He), and for selected diatomic molecules including oxygen. An approximate expression is derived to estimate the relative shift in the melting temperature with isotopic substitution. In general, the magnitude of the effects diminishes with increasing molecular mass and increasing temperature. Knowledge of the VPIE, molar volumes, and heat of fusion are sufficient to estimate the temperature shift or isotopic sensitivity coefficient via the derived expression. The usefulness of this approach is demonstrated in the estimation of isotopic sensitivities and uncertainties for triple points of xenon and molecular oxygen for which few documented estimates were previously available. The calculated sensitivities from this study are considerably higher than previous estimates for Xe, and lower than other estimates in the case of oxygen. In both these cases, the predicted sensitivities are small and the resulting variations in triple point temperatures due to mass fractionation effects are less than 20 μK.

  4. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin²θ ≲ 10⁻⁶, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin²θ ≈ 5 × 10⁻⁹ at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  5. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (∼15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

  6. A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation

    PubMed Central

    Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.

    2014-01-01

    Background While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods We present quantification of the effect of sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and estimated relative risk adjusted for given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed. PMID:25068293

  7. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described simply as functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases, because Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameters of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the estimated standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods outperform maximum likelihood. The sensitivity analyses show some sensitivity to shifts in the prior location, and also show that the Bayesian analysis is robust within the range between the true value and the maximum likelihood estimate.
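
    As a minimal, non-Bayesian counterpart to the estimation problem described, the sketch below fits a single-cause Weibull failure-time model by maximum likelihood with SciPy; the shape and scale values are invented, and a full competing-risk Bayesian treatment, as in the paper, would place priors on these parameters instead.

      from scipy.stats import weibull_min

      true_shape, true_scale = 1.8, 1000.0
      lifetimes = weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=3)

      # Maximum likelihood fit with the location parameter fixed at zero
      shape_hat, loc_hat, scale_hat = weibull_min.fit(lifetimes, floc=0)
      print("estimated shape and scale:", shape_hat, scale_hat)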

  8. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models.

    PubMed

    Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling

    2013-07-04

    Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.

  9. A Cost-Effectiveness Analysis of Clopidogrel for Patients with Non-ST-Segment Elevation Acute Coronary Syndrome in China.

    PubMed

    Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can

    2016-09-01

    There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspective of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness of up to 1 year of treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life years (QALYs) gained and incremental cost over a lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate overall survival to a lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of patient-level insurance data in China. To assess the impact of parameter uncertainty on the cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using Monte Carlo simulation. The therapy of clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 China Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations were below the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to the price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with clopidogrel plus ASA for up to 1 year for patients with NSTEACS is cost effective in the local context of China from a public payer's perspective. Sanofi China.
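
    The structure of such a cost-effectiveness calculation can be illustrated with a deliberately simplified Markov cohort sketch: discounted QALYs and costs are accumulated for each strategy and combined into an incremental cost-effectiveness ratio. The states, transition probabilities, costs, utilities and drug cost below are invented placeholders, not the CURE-derived inputs used in the study.

      import numpy as np

      # Annual transition probabilities between assumed states (event-free, post-event, dead)
      P = np.array([[0.90, 0.07, 0.03],
                    [0.00, 0.92, 0.08],
                    [0.00, 0.00, 1.00]])
      utility = np.array([0.85, 0.70, 0.0])    # QALY weight per state-year (assumed)
      cost    = np.array([800., 3000., 0.])    # annual cost per state in CNY (assumed)

      def run(P, extra_drug_cost=0.0, years=20, disc=0.035):
          dist = np.array([1.0, 0.0, 0.0])     # whole cohort starts event-free
          qalys = total_cost = 0.0
          for t in range(years):
              df = 1 / (1 + disc) ** t
              qalys += df * dist @ utility
              total_cost += df * (dist @ cost + extra_drug_cost * dist[0])  # drug taken while event-free
              dist = dist @ P
          return qalys, total_cost

      q0, c0 = run(P)                                          # comparator strategy
      P_rx = P.copy(); P_rx[0] = [0.93, 0.05, 0.02]            # assumed risk reduction on therapy
      q1, c1 = run(P_rx, extra_drug_cost=2500.0)
      print("ICER (CNY per QALY):", (c1 - c0) / (q1 - q0))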

  10. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
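
    The classical local analysis that the paper argues is insufficient can be sketched in a few lines: build a finite-difference sensitivity matrix at a nominal parameter point and examine the spectrum of the resulting Fisher information matrix, where a wide eigenvalue spread signals a 'sloppy', nearly unidentifiable direction. The two-rate decay model below is an assumed toy example, not the NF-κB model.

      import numpy as np

      def model(theta, t):
          k1, k2 = theta
          return np.exp(-k1 * t) + np.exp(-k2 * t)   # two nearly redundant decay rates

      t = np.linspace(0.0, 5.0, 50)
      theta0 = np.array([1.0, 1.05])
      eps = 1e-6
      S = np.column_stack([
          (model(theta0 + eps * np.eye(2)[i], t) - model(theta0, t)) / eps
          for i in range(2)
      ])
      fim = S.T @ S                                   # Fisher information, unit noise assumed
      eigvals = np.linalg.eigvalsh(fim)
      print("FIM eigenvalue spread:", eigvals[-1] / eigvals[0])   # large ratio = sloppy direction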

  11. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the model performance of the ON-N and NH₃-N simulations was sensitive. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive parameters for the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
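
    A minimal sketch of the GLUE step described above, written in Python rather than R-FME: sample parameter sets from prior ranges, keep the "behavioral" sets whose Nash-Sutcliffe efficiency exceeds a cutoff, and form likelihood-weighted estimates. The toy first-order decay model, the synthetic observations, and the 0.5 threshold are illustrative assumptions, not the nitrogen kinetics of the study.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 30)

        def simulate(k, c0):
            """Toy first-order decay standing in for the nitrogen kinetics."""
            return c0 * np.exp(-k * t)

        # Synthetic "observations" from a known parameter set plus noise
        obs = simulate(0.3, 10.0) + rng.normal(0.0, 0.3, t.size)

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency."""
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # GLUE: Monte Carlo sampling of the prior parameter ranges
        n = 5000
        ks = rng.uniform(0.05, 1.0, n)
        c0s = rng.uniform(5.0, 15.0, n)
        scores = np.array([nse(simulate(k, c0), obs) for k, c0 in zip(ks, c0s)])

        behavioral = scores > 0.5              # behavioral threshold (illustrative)
        weights = scores[behavioral] - 0.5     # likelihood weights above the threshold
        weights /= weights.sum()
        print("behavioral sets:", int(behavioral.sum()), "of", n)
        print("weighted estimate of k:", float(np.sum(weights * ks[behavioral])))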

  12. Resting spontaneous baroreflex sensitivity and cardiac autonomic control in anabolic androgenic steroid users

    PubMed Central

    dos Santos, Marcelo R.; Sayegh, Ana L.C.; Armani, Rafael; Costa-Hong, Valéria; de Souza, Francis R.; Toschi-Dias, Edgar; Bortolotto, Luiz A.; Yonamine, Mauricio; Negrão, Carlos E.; Alves, Maria-Janieire N.N.

    2018-01-01

    OBJECTIVES: Misuse of anabolic androgenic steroids in athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, its abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure. Nevertheless, the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. METHODS: Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. RESULTS: Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and baroreflex sensitivity and pulse wave velocity. CONCLUSIONS: Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users. Moreover, anabolic androgenic steroid users showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population. PMID:29791601
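
    A minimal sketch of the sequence method named in the abstract, under common (but here assumed) settings: find runs of three or more beats in which systolic blood pressure and the following R-R interval both rise or both fall beyond small thresholds, regress R-R interval on pressure within each run, and average the slopes. The synthetic beat series is illustrative only.

        import numpy as np

        def sequence_brs(sbp, rri, min_len=3, dp=1.0, dr=5.0):
            """Spontaneous baroreflex sensitivity (ms/mmHg) by the sequence method.
            sbp: systolic blood pressure per beat (mmHg); rri: R-R interval per beat (ms).
            Searches for runs where both series rise, or both fall, beat over beat."""
            slopes, i, n = [], 0, len(sbp)
            while i < n - 1:
                direction = np.sign(sbp[i + 1] - sbp[i])
                j = i
                while (j < n - 1
                       and np.sign(sbp[j + 1] - sbp[j]) == direction
                       and np.sign(rri[j + 1] - rri[j]) == direction
                       and abs(sbp[j + 1] - sbp[j]) >= dp
                       and abs(rri[j + 1] - rri[j]) >= dr):
                    j += 1
                if j - i + 1 >= min_len:
                    slopes.append(np.polyfit(sbp[i:j + 1], rri[i:j + 1], 1)[0])
                i = max(j, i + 1)
            return float(np.mean(slopes)) if slopes else float("nan")

        # Illustrative synthetic beat-to-beat series with ~8 ms/mmHg built in
        rng = np.random.default_rng(2)
        sbp = 120 + np.cumsum(rng.normal(0, 2, 300))
        rri = 850 + 8.0 * (sbp - 120) + rng.normal(0, 10, 300)
        print("estimated BRS (ms/mmHg):", round(sequence_brs(sbp, rri), 2))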

  13. Resting spontaneous baroreflex sensitivity and cardiac autonomic control in anabolic androgenic steroid users.

    PubMed

    Santos, Marcelo R Dos; Sayegh, Ana L C; Armani, Rafael; Costa-Hong, Valéria; Souza, Francis R de; Toschi-Dias, Edgar; Bortolotto, Luiz A; Yonamine, Mauricio; Negrão, Carlos E; Alves, Maria-Janieire N N

    2018-05-21

    Misuse of anabolic androgenic steroids in athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, its abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure. Nevertheless, the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and baroreflex sensitivity and pulse wave velocity. Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users. Moreover, anabolic androgenic steroid users showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population.

  14. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end inhalation and the end exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates a sensitivity analysis of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system for all subjects are positively correlated (close to +1), i.e. regions with high sensitivity have more error in the Jacobian determinant on average.
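
    For reference, a small Python sketch of the quantity being analyzed: the Jacobian determinant of a voxel-wise displacement field, computed with finite differences. The randomly generated smooth field stands in for a registration result and is not the B-spline perturbation model used in the paper.

        import numpy as np

        def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
            """Jacobian determinant of the mapping x -> x + u(x).
            disp: displacement field, shape (3, nx, ny, nz), same units as spacing."""
            grads = [np.gradient(disp[c], *spacing) for c in range(3)]  # du_c/dx_k
            J = np.empty(disp.shape[1:] + (3, 3))
            for c in range(3):
                for k in range(3):
                    J[..., c, k] = grads[c][k]
            J += np.eye(3)                     # add identity: d(x + u)/dx
            return np.linalg.det(J)

        # Illustrative smooth displacement field on a small grid
        rng = np.random.default_rng(3)
        disp = rng.normal(0, 1, (3, 32, 32, 32))
        # crude smoothing by repeated local averaging to mimic a smooth deformation
        for _ in range(10):
            for ax in (1, 2, 3):
                disp = 0.5 * disp + 0.25 * (np.roll(disp, 1, axis=ax) + np.roll(disp, -1, axis=ax))

        jac = jacobian_determinant(disp)
        print("Jacobian determinant: min %.3f, mean %.3f, max %.3f"
              % (jac.min(), jac.mean(), jac.max()))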

  15. Decision analysis with cumulative prospect theory.

    PubMed

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
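
    A minimal sketch of the probability-transformation step the abstract describes: value a simple standard-gamble-style prospect with untransformed probabilities and with rank-dependent decision weights from the Tversky-Kahneman weighting function. The outcomes, probabilities, and the weighting parameter gamma = 0.61 are illustrative assumptions, not the study's inputs.

        import numpy as np

        def tk_weight(p, gamma=0.61):
            """Tversky-Kahneman (1992) probability weighting function."""
            return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

        def cpt_value(outcomes, probs, gamma=0.61):
            """Rank-dependent (cumulative) weighting of a gamble over gains.
            Decision weights are differences of the weighted decumulative distribution."""
            order = np.argsort(outcomes)[::-1]           # best outcome first
            o, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
            cum = np.cumsum(p)                           # P(outcome at least this good)
            w = tk_weight(cum, gamma)
            weights = np.diff(np.concatenate(([0.0], w)))
            return float(np.sum(weights * o))

        # Illustrative "standard gamble": 0.9 chance of full health, 0.1 chance of death
        outcomes, probs = [1.0, 0.0], [0.9, 0.1]
        print("expected utility:", np.dot(outcomes, probs))
        print("CPT value:       ", cpt_value(outcomes, probs))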

  16. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman Filter) estimation of orbit and camera bias parameters is given.

  17. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  18. Revisiting the cost-effectiveness of universal cervical length screening: importance of progesterone efficacy.

    PubMed

    Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John

    2016-07-01

    Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of both the strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a 1-way sensitivity analysis, screening results in cost-saving if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay (WTP) threshold of $60,000 in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34% and 93%. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than has been suggested by available data for US women. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
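
    A minimal sketch of the one-way (threshold) sensitivity analysis described above: sweep the assumed relative risk reduction from progesterone and recompute the incremental cost-effectiveness ratio of screening versus no screening against a willingness-to-pay threshold. Every input below is a hypothetical placeholder, not a value from the study.

        import numpy as np

        # Hypothetical per-pregnancy inputs (placeholders, not study values)
        P_SHORT_CERVIX = 0.02        # prevalence of short cervix on screening
        P_PTB_SHORT = 0.30           # risk of preterm birth if short cervix, untreated
        COST_SCREEN = 50.0           # ultrasound cervical length measurement
        COST_PROGESTERONE = 300.0    # course of vaginal progesterone
        COST_PTB = 50_000.0          # incremental cost of a preterm birth
        QALY_LOSS_PTB = 1.0          # QALYs lost per preterm birth (children's QALYs)
        WTP = 60_000.0               # willingness-to-pay threshold ($/QALY)

        def icer_screening(rrr):
            """ICER of universal screening vs no screening for a given
            relative risk reduction (rrr) of preterm birth from progesterone."""
            ptb_averted = P_SHORT_CERVIX * P_PTB_SHORT * rrr
            d_cost = COST_SCREEN + P_SHORT_CERVIX * COST_PROGESTERONE - ptb_averted * COST_PTB
            d_qaly = ptb_averted * QALY_LOSS_PTB
            return d_cost / d_qaly

        for rrr in np.arange(0.05, 0.65, 0.05):
            value = icer_screening(rrr)
            tag = "cost-saving" if value < 0 else ("below WTP" if value < WTP else "above WTP")
            print(f"RRR = {rrr:4.0%}  ICER = {value:12,.0f} $/QALY  ({tag})")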

  19. The management of patients with T1 adenocarcinoma of the low rectum: a decision analysis.

    PubMed

    Johnston, Calvin F; Tomlinson, George; Temple, Larissa K; Baxter, Nancy N

    2013-04-01

    Decision making for patients with T1 adenocarcinoma of the low rectum, when treatment options are limited to a transanal local excision or abdominoperineal resection, is challenging. The aim of this study was to develop a contemporary decision analysis to assist patients and clinicians in balancing the goals of maximizing life expectancy and quality of life in this situation. We constructed a Markov-type microsimulation in open-source software. Recurrence rates and quality-of-life parameters were elicited by systematic literature reviews. Sensitivity analyses were performed on key model parameters. Our base case for analysis was a 65-year-old man with low-lying T1N0 rectal cancer. We determined the sensitivity of our model for sex, age up to 80, and T stage. The main outcome measured was quality-adjusted life-years. In the base case, selecting transanal local excision over abdominoperineal resection resulted in a loss of 0.53 years of life expectancy but a gain of 0.97 quality-adjusted life-years. One-way sensitivity analysis demonstrated a health state utility value threshold for permanent colostomy of 0.93. This value ranged from 0.88 to 1.0 based on tumor recurrence risk. There were no other model sensitivities. Some model parameter estimates were based on weak data. In our model, transanal local excision was found to be the preferable approach for most patients. An abdominoperineal resection has a 3.5% longer life expectancy, but this advantage is lost when the quality-of-life reduction reported by stoma patients is weighed in. The minority group in whom abdominoperineal resection is preferred comprises those who are unwilling to sacrifice 7% of their life expectancy to avoid a permanent stoma. This is estimated to be approximately 25% of all patients. The threshold increases to 12% of life expectancy in high-risk tumors. No other factors are found to be relevant to the decision.

  20. Cost effectiveness of OptiMal® rapid diagnostic test for malaria in remote areas of the Amazon Region, Brazil

    PubMed Central

    2010-01-01

    Background In areas with limited structure in place for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method The cost-effectiveness of the OptiMal® test and thick smear microscopy was estimated and compared. Data were collected on remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was that of year 2006. The results were expressed in costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results In the base case scenario, considering 92% and 95% sensitivity for thick smear microscopy to Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$549.9 per adequately diagnosed case. In sensitivity analysis, when sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion Microscopy is more cost-effective than OptiMal® in these remote areas if high accuracy of microscopy is maintained in the field. Decision regarding use of rapid tests for diagnosis of malaria in these areas depends on current microscopy accuracy in the field. PMID:20937094

  1. Arecibo Pulsar Survey Using ALFA. IV. Mock Spectrometer Data Analysis, Survey Sensitivity, and the Discovery of 40 Pulsars

    NASA Astrophysics Data System (ADS)

    Lazarus, P.; Brazier, A.; Hessels, J. W. T.; Karako-Argaman, C.; Kaspi, V. M.; Lynch, R.; Madsen, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Swiggum, J.; Zhu, W. W.; Allen, B.; Bogdanov, S.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R.; Freire, P. C. C.; Jenet, F. A.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Venkataraman, A.

    2015-10-01

    The on-going Arecibo Pulsar-ALFA (PALFA) survey began in 2004 and is searching for radio pulsars in the Galactic plane at 1.4 GHz. Here we present a comprehensive description of one of its main data reduction pipelines that is based on the PRESTO software and includes new interference-excision algorithms and candidate selection heuristics. This pipeline has been used to discover 40 pulsars, bringing the survey’s discovery total to 144 pulsars. Of the new discoveries, eight are millisecond pulsars (MSPs; P < 10 ms) and one is a Fast Radio Burst (FRB). This pipeline has also re-detected 188 previously known pulsars, 60 of them previously discovered by the other PALFA pipelines. We present a novel method for determining the survey sensitivity that accurately takes into account the effects of interference and red noise: we inject synthetic pulsar signals with various parameters into real survey observations and then attempt to recover them with our pipeline. We find that the PALFA survey achieves the sensitivity to MSPs predicted by theoretical models but suffers a degradation for P ≳ 100 ms that gradually becomes up to ~10 times worse for P > 4 s at DM < 150 pc cm⁻³. We estimate 33 ± 3% of the slower pulsars are missed, largely due to red noise. A population synthesis analysis using the sensitivity limits we measured suggests the PALFA survey should have found 224 ± 16 un-recycled pulsars in the data set analyzed, in agreement with the 241 actually detected. The reduced sensitivity could have implications on estimates of the number of long-period pulsars in the Galaxy.

  2. Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands

    PubMed Central

    Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven

    2015-01-01

    Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to the response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794

  3. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    PubMed

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded with nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters, wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography, on sensor outputs was investigated to obtain a novel analytical method to predict linearity and sensitivity. From the model, all parameters except for the beam diameter have a significant influence on measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used for the experiment. By utilizing a shorter wavelength, a smaller sensor distance, and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated and the results showed a good agreement with the theoretically estimated results. This sensor is expected to be easily implemented into nanopositioning stage applications at a low cost, and the mathematical model introduced here can be used as a tool for the design and performance estimation of the knife edge-based sensor.

  4. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  5. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then applied our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
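
    A small grid-posterior sketch of the removal-sampling problem discussed above: successive removal counts are modeled as binomial draws from the remaining population with unknown capture probability, and the posterior mean of the population size is compared under a flat ("diffuse") prior and a weakly penalizing prior on large population sizes. Counts and priors are illustrative.

        import numpy as np
        from scipy.stats import binom

        counts = [41, 29, 17]                    # illustrative removal counts over 3 passes
        N_grid = np.arange(sum(counts), 1001)    # candidate population sizes
        p_grid = np.linspace(0.01, 0.99, 99)     # capture probability grid (flat prior on p)

        def log_likelihood(N, counts, p_grid):
            """Removal sampling: each pass removes Binomial(remaining, p) animals."""
            remaining, ll = N, np.zeros(p_grid.size)
            for c in counts:
                ll += binom.logpmf(c, remaining, p_grid)
                remaining -= c
            return ll

        logL = np.array([log_likelihood(N, counts, p_grid) for N in N_grid])

        def posterior_mean_N(log_prior_N):
            """Marginal posterior mean of N for a given log-prior over N."""
            logpost = logL + log_prior_N[:, None]
            post = np.exp(logpost - logpost.max())
            marg = post.sum(axis=1)
            marg /= marg.sum()
            return float(np.sum(N_grid * marg))

        flat = np.zeros(N_grid.size)                  # "diffuse" prior on N
        penalized = -0.01 * N_grid.astype(float)      # weak exponential penalty on large N
        print("posterior mean N, flat prior:     ", round(posterior_mean_N(flat), 1))
        print("posterior mean N, penalized prior:", round(posterior_mean_N(penalized), 1))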

  6. Estimation of regional differences in wind erosion sensitivity in Hungary

    NASA Astrophysics Data System (ADS)

    Mezősi, G.; Blanka, V.; Bata, T.; Kovács, F.; Meyer, B.

    2015-01-01

    In Hungary, wind erosion is one of the most serious natural hazards. Spatial and temporal variation in the factors that determine the location and intensity of wind erosion damage are not well known, nor are the regional and local sensitivities to erosion. Because of methodological challenges, no multi-factor, regional wind erosion sensitivity map is available for Hungary. The aim of this study was to develop a method to estimate the regional differences in wind erosion sensitivity and exposure in Hungary. Wind erosion sensitivity was modelled using the key factors of soil sensitivity, vegetation cover and wind erodibility as proxies. These factors were first estimated separately by factor sensitivity maps and later combined by fuzzy logic into a regional-scale wind erosion sensitivity map. Large areas were evaluated by using publicly available data sets of remotely sensed vegetation information, soil maps and meteorological data on wind speed. The resulting estimates were verified by field studies and examining the economic losses from wind erosion as compensated by the state insurance company. The spatial resolution of the resulting sensitivity map is suitable for regional applications, as identifying sensitive areas is the foundation for diverse land development control measures and implementing management activities.

  7. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary, depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will not only be sensitive to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from: the NIWA Virtual Climate Station Network (VCSN) data (which is considered as the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degrees and 0.5 degrees resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model constructed using the same base data and forced with the VCSN precipitation dataset. Results of the comparison of the rainfall products show that there are significant differences in precipitation volume between the forcing products, in the order of 20% at most points. Even more significant differences can be seen, however, in the distribution of precipitation. For the VCSN data, wet days (defined as >0.1 mm precipitation) occur on some 20-30% of days (depending on location). This is reasonably reflected in the TRMM and CHIRPS data, while for the re-analysis based products some 60% to 80% of days are wet, albeit at lower intensities. These differences are amplified in the recharge estimates. At most points, volumetric differences are in the order of 40-60%, though differences may range over several orders of magnitude. The frequency distributions of recharge also differ significantly, with recharge over 0.1 mm occurring on 4-6% of days for the VCSN, CHIRPS, and TRMM datasets, but up to the order of 12% of days for the re-analysis data. Comparison against the lysimeter data shows estimates to be reasonable, in particular for the reference datasets. Surprisingly, some estimates from the lower-resolution re-analysis datasets are reasonable, though this does seem to be due to lower recharge being compensated by recharge occurring more frequently.
These results underline the importance of correctly representing rainfall volumes, as well as their distribution, particularly when evaluating possible changes in, for example, precipitation intensity and volume. This holds for precipitation data derived from satellite-based and re-analysis products, but also for interpolated data from gauges, where the distribution of intensities is strongly influenced by the interpolation process.
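
    An illustrative sketch of the threshold-driven water balance underlying these comparisons: a single soil-moisture bucket forced with daily rainfall and potential evaporation, where rainfall above a daily intensity threshold runs off and storage above capacity drains to recharge. Two synthetic rainfall series with similar annual totals but different intensity distributions are compared; all parameter values are assumptions for illustration, not the Canterbury model settings.

        import numpy as np

        def recharge(rain, pet, capacity=80.0, runoff_thresh=20.0):
            """Daily bucket water balance; returns total drainage (recharge) in mm.
            Daily rain above runoff_thresh runs off; storage above capacity drains."""
            store, total = 0.5 * capacity, 0.0
            for r, e in zip(rain, pet):
                store += min(r, runoff_thresh)     # infiltration (intensity threshold)
                store -= min(e, store)             # actual evaporation limited by storage
                if store > capacity:
                    total += store - capacity      # drainage below the root zone = recharge
                    store = capacity
            return total

        rng = np.random.default_rng(4)
        n_days = 365
        pet = np.full(n_days, 2.0)                 # mm/day, illustrative

        # Similar annual totals, very different intensity distributions
        frequent = rng.gamma(2.0, 1.7, n_days) * (rng.random(n_days) < 0.7)
        intense = rng.gamma(2.0, 6.0, n_days) * (rng.random(n_days) < 0.2)
        for name, rain in [("frequent, low intensity", frequent),
                           ("rare, high intensity  ", intense)]:
            print(f"{name}: rain {rain.sum():5.0f} mm -> recharge {recharge(rain, pet):5.0f} mm")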

  8. Shot noise-limited Cramér-Rao bound and algorithmic sensitivity for wavelength shifting interferometry

    NASA Astrophysics Data System (ADS)

    Chen, Shichao; Zhu, Yizheng

    2017-02-01

    Sensitivity is a critical index to measure the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies for fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints in system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
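
    A generic numerical sketch of a shot-noise-limited Cramér-Rao bound (not the paper's derivation): if each phase-shifted frame records Poisson counts with mean N(1 + V cos(φ + δ_k)), the Fisher information for φ is Σ_k (dλ_k/dφ)²/λ_k, and its inverse square root bounds the phase standard deviation. The photon numbers, visibility, and the 633 nm wavelength used to convert phase to pathlength are illustrative assumptions.

        import numpy as np

        def phase_crb(n_photons, visibility, deltas, phi):
            """Shot-noise-limited Cramér-Rao bound on the phase (radians).
            Frame k records Poisson counts with mean
                lambda_k = n_photons * (1 + visibility * cos(phi + deltas[k])).
            For Poisson data, Fisher information I(phi) = sum_k (dlambda_k/dphi)^2 / lambda_k."""
            lam = n_photons * (1.0 + visibility * np.cos(phi + deltas))
            dlam = -n_photons * visibility * np.sin(phi + deltas)
            info = np.sum(dlam ** 2 / lam)
            return 1.0 / np.sqrt(info)

        deltas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])   # 4-step phase shifts
        for n in (1e3, 1e4, 1e5):
            sigma_phi = phase_crb(n, visibility=0.8, deltas=deltas, phi=0.3)
            # convert the phase bound to optical pathlength for an assumed 633 nm wavelength
            sigma_opl = sigma_phi * 633e-9 / (2 * np.pi)
            print(f"N = {n:8.0f} photons/frame: sigma_phi = {sigma_phi:.2e} rad, "
                  f"sigma_OPL = {sigma_opl * 1e12:.2f} pm")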

  9. [Medical and economic evaluation of donated blood screening for hepatitis C and non-A, non-B, non-C hepatitis].

    PubMed

    Vergnon, P; Colin, C; Jullien, A M; Bory, E; Excoffier, S; Matillon, Y; Trepo, C

    1996-01-01

    The aim of this study was to evaluate the cost of the hepatitis C and non-A non-B non-C screening strategy in donated blood currently used in French transfusion centres, and to assess its effect in the blood transfusion centres according to the prevalence of the disease and the intrinsic values of the tests. This screening strategy was based on alanine aminotransferase assay, and HBc and HCV antibody detection. In 1993, a survey was conducted in 26 French transfusion centers to estimate the costs of the screening strategy currently used. Average expenditure on diagnostic sets, equipment, staff and administration charges for hepatitis C and non-A non-B non-C screening were calculated. From these results, we estimated the cost of the previous strategy which did not involve HCV antibody testing, so as to determine the incremental cost between the two strategies. We used clinical decision analysis and sensitivity analysis to estimate the incremental cost-effectiveness ratio with data gathered from the literature and examine the impact on blood transfusion centres. Implemented for 100,000 volunteer blood donations, the incremental cost of the new strategy was FF 2,566,111 (1992) and the marginal effectiveness was 180 additional infected donations detected. The sensitivity analysis showed the major influence of infection prevalence in donated blood on the incremental cost-effectiveness ratio: the lower the prevalence, the higher the cost-effectiveness ratio per contaminated blood product avoided.

  10. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  11. Cost-analysis of an oral health outreach program for preschool children in a low socioeconomic multicultural area in Sweden.

    PubMed

    Wennhall, Inger; Norlund, Anders; Matsson, Lars; Twetman, Svante

    2010-01-01

    The aim was to calculate the total and the net costs per child included in a 3-year caries preventive program for preschool children and to make estimates of expected lowest and highest costs in a sensitivity analysis. The direct costs for prevention and dental care were applied retrospectively to a comprehensive oral health outreach project for preschool children conducted in a low-socioeconomic multi-cultural urban area. The outcome was compared with historical controls from the same area with conventional dental care. The cost per minute for the various dental professions was added to the cost of materials, rental facilities and equipment based on accounting data. The cost for fillings was extracted from a specified per diem list. Overhead costs were assumed to correspond to 50% of salaries and all costs were calculated as net present value per participating child in the program and expressed in Euro. The results revealed an estimated total cost of 310 Euro per included child (net present value) in the 3-year program. Half of the costs were attributed to the first year of the program and the costs of manpower constituted 45% of the total costs. When the total cost was reduced with the cost of conventional care and the revenue of avoided fillings, the net cost was estimated to 30 Euro. A sensitivity analysis displayed that a net gain could be possible with a maximal outcome of the program. In conclusion, the estimated net costs were displayed and available to those considering implementation of a similar population-based preventive program in areas where preschool children are at high caries risk.

  12. Identifying indicators of illegal behaviour: carnivore killing in human-managed landscapes.

    PubMed

    St John, Freya A V; Keane, Aidan M; Edwards-Jones, Gareth; Jones, Lauren; Yarnell, Richard W; Jones, Julia P G

    2012-02-22

    Managing natural resources often depends on influencing people's behaviour; however, effectively targeting interventions to discourage environmentally harmful behaviours is challenging because those involved may be unwilling to identify themselves. Non-sensitive indicators of sensitive behaviours are therefore needed. Previous studies have investigated people's attitudes, assuming attitudes reflect behaviour. There has also been interest in using people's estimates of the proportion of their peers involved in sensitive behaviours to identify those involved, since people tend to assume that others behave like themselves. However, there has been little attempt to test the potential of such indicators. We use the randomized response technique (RRT), designed for investigating sensitive behaviours, to estimate the proportion of farmers in north-eastern South Africa killing carnivores, and use a modified logistic regression model to explore relationships between our best estimates of true behaviour (from RRT) and our proposed non-sensitive indicators (including farmers' attitudes, and estimates of peer-behaviour). Farmers' attitudes towards carnivores, question sensitivity, and estimates of peers' behaviour predict the likelihood of farmers killing carnivores. Attitude and estimates of peer-behaviour are useful indicators of involvement in illicit behaviours and may be used to identify groups of people to engage in interventions aimed at changing behaviour.
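
    A minimal sketch of a forced-response randomized response design of the general kind used above: each respondent answers the sensitive question truthfully with a known probability and otherwise gives a forced answer, so the prevalence of the behaviour can be recovered from the observed "yes" rate. The design probabilities and simulated data are illustrative, not those of the South African survey.

        import numpy as np

        def rrt_estimate(yes_count, n, p_truth=0.75, p_forced_yes=0.125):
            """Forced-response randomized response technique.
            With prob. p_truth the respondent answers truthfully; with prob. p_forced_yes
            they must say 'yes'; otherwise they must say 'no'.
            Observed P(yes) = p_truth * pi + p_forced_yes, solved for pi."""
            lam = yes_count / n
            pi_hat = (lam - p_forced_yes) / p_truth
            se = np.sqrt(lam * (1.0 - lam) / n) / p_truth
            return pi_hat, se

        # Simulated survey: true prevalence of the sensitive behaviour is 20%
        rng = np.random.default_rng(5)
        n, true_pi = 600, 0.20
        behaviour = rng.random(n) < true_pi
        u = rng.random(n)
        answers = np.where(u < 0.75, behaviour,            # truthful answer
                  np.where(u < 0.875, True, False))        # forced yes / forced no
        pi_hat, se = rrt_estimate(answers.sum(), n)
        print(f"estimated prevalence: {pi_hat:.3f} ± {se:.3f} (true {true_pi})")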

  13. Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea.

    PubMed

    Li, Zhiming; Teng, Zhidong; Feng, Xiaomei; Li, Yingke; Zhang, Huiguo

    2015-01-01

    In order to investigate the transmission mechanism of infectious individuals with Ebola virus, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovered) epidemic model. The basic reproduction number is defined. The mathematical analysis on the existence and stability of the disease-free equilibrium and endemic equilibrium is given. As an application of the model, we use the recognized infection and death cases in Guinea to estimate the parameters of the model by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze sensitivity and uncertainty using partial rank correlation coefficients.
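
    A hedged sketch of the fitting step described: integrate a simple SEIR-type compartment model, estimate the transmission and recovery rates from cumulative case counts by least squares, and report the basic reproduction number as their ratio. The compartment structure is simplified (no explicit treated class), and the synthetic "case data", latent period, and starting values are illustrative, not the Guinea data.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        N_POP = 1e6

        def seir(t, y, beta, sigma, gamma):
            s, e, i, c = y          # susceptible, exposed, infectious, cumulative cases
            new_inf = beta * s * i / N_POP
            return [-new_inf, new_inf - sigma * e, sigma * e - gamma * i, sigma * e]

        def cumulative_cases(params, t_obs):
            beta, gamma = params
            sol = solve_ivp(seir, (0, t_obs[-1]), [N_POP - 10, 0, 10, 0],
                            t_eval=t_obs, args=(beta, 1 / 10.0, gamma), rtol=1e-8)
            return sol.y[3]

        # Synthetic weekly cumulative case counts from "true" beta = 0.30, gamma = 0.14
        t_obs = np.arange(0, 201, 7.0)
        rng = np.random.default_rng(6)
        data = cumulative_cases([0.30, 0.14], t_obs) * rng.lognormal(0, 0.05, t_obs.size)

        fit = least_squares(lambda p: cumulative_cases(p, t_obs) - data,
                            x0=[0.5, 0.2], bounds=([0.01, 0.01], [2.0, 1.0]))
        beta_hat, gamma_hat = fit.x
        print(f"beta = {beta_hat:.3f}, gamma = {gamma_hat:.3f}, R0 = {beta_hat / gamma_hat:.2f}")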

  14. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. The methods currently available for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  15. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
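
    A toy illustration of the variance-reduction idea behind such couplings (not the goal-oriented construction itself): estimate the derivative of an expected observable with respect to a rate by central finite differences, once with independent simulations and once with simulations coupled through shared random numbers. A simple birth process stands in for the spatial KMC model.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(rate, u):
            """Toy continuous-time birth process on t in [0, 1]: count the events whose
            exponential waiting times (driven by the uniforms u) fit before t = 1."""
            waits = -np.log(u) / rate
            return np.searchsorted(np.cumsum(waits), 1.0)

        def fd_sensitivity(rate, eps, n_samples, coupled):
            """Finite-difference estimate of d E[count] / d rate."""
            diffs = np.empty(n_samples)
            for i in range(n_samples):
                u_plus = rng.random(200)
                u_minus = u_plus if coupled else rng.random(200)   # coupling = shared randomness
                diffs[i] = (simulate(rate + eps, u_plus) - simulate(rate - eps, u_minus)) / (2 * eps)
            return diffs.mean(), diffs.std(ddof=1) / np.sqrt(n_samples)

        for coupled in (False, True):
            est, se = fd_sensitivity(rate=10.0, eps=0.5, n_samples=2000, coupled=coupled)
            print(f"coupled={coupled!s:5s}  dE[count]/drate ≈ {est:5.2f} ± {se:4.2f}")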

  16. Inverse modeling and uncertainty analysis of potential groundwater recharge to the confined semi-fossil Ohangwena II Aquifer, Namibia

    NASA Astrophysics Data System (ADS)

    Wallner, Markus; Houben, Georg; Lohe, Christoph; Quinger, Martin; Himmelsbach, Thomas

    2017-12-01

    The identification of potential recharge areas and estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) is crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle the problem of data scarcity, which is a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model against different boundary conditions and internal structures was tested. Parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which are reasonable for semi-arid regions.

  17. Why do we differ in number sense? Evidence from a genetically sensitive investigation

    PubMed Central

    Tosto, M.G.; Petrill, S.A.; Halberda, J.; Trzaskowski, M.; Tikhomirova, T.N.; Bogdanova, O.Y.; Ly, R.; Wilmer, J.B.; Naiman, D.Q.; Germine, L.; Plomin, R.; Kovas, Y.

    2014-01-01

    Basic intellectual abilities of quantity and numerosity estimation have been detected across animal species. Such abilities are referred to as ‘number sense’. For human species, individual differences in number sense are detectable early in life, persist in later development, and relate to general intelligence. The origins of these individual differences are unknown. To address this question, we conducted the first large-scale genetically sensitive investigation of number sense, assessing numerosity discrimination abilities in 837 pairs of monozygotic and 1422 pairs of dizygotic 16-year-old twin pairs. Univariate genetic analysis of the twin data revealed that number sense is modestly heritable (32%), with individual differences being largely explained by non-shared environmental influences (68%) and no contribution from shared environmental factors. Sex-Limitation model fitting revealed no differences between males and females in the etiology of individual differences in number sense abilities. We also carried out Genome-wide Complex Trait Analysis (GCTA) that estimates the population variance explained by additive effects of DNA differences among unrelated individuals. For 1118 unrelated individuals in our sample with genotyping information on 1.7 million DNA markers, GCTA estimated zero heritability for number sense, unlike other cognitive abilities in the same twin study where the GCTA heritability estimates were about 25%. The low heritability of number sense, observed in this study, is consistent with the directional selection explanation whereby additive genetic variance for evolutionary important traits is reduced. PMID:24696527
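
    For orientation, the classical twin-comparison arithmetic behind ACE-type estimates like those reported: Falconer's formulas recover additive genetic, shared-environment, and non-shared-environment components from monozygotic and dizygotic twin correlations. The correlations below are invented so that the output matches the reported pattern (about 32% heritability, no shared environment); they are not the study's data.

        def falconer_ace(r_mz, r_dz):
            """Falconer's approximation to the ACE variance components from
            monozygotic (r_mz) and dizygotic (r_dz) twin correlations."""
            a2 = 2.0 * (r_mz - r_dz)        # additive genetic variance (heritability)
            c2 = 2.0 * r_dz - r_mz          # shared environment
            e2 = 1.0 - r_mz                 # non-shared environment + measurement error
            return a2, c2, e2

        # Illustrative correlations consistent with ~32% heritability and no shared environment
        a2, c2, e2 = falconer_ace(r_mz=0.32, r_dz=0.16)
        print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")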

  18. Estimating the prevalence of heterozygous familial hypercholesterolaemia: a systematic review and meta-analysis

    PubMed Central

    Akioyamen, Leo E; Genest, Jacques; Shan, Shubham D; Reel, Rachel L; Albaum, Jordan M; Chu, Anna; Tu, Jack V

    2017-01-01

    Objectives Heterozygous familial hypercholesterolaemia (FH) confers a significant risk for premature cardiovascular disease (CVD). However, the estimated prevalence of FH varies substantially among studies. We aimed to provide a summary estimate of FH prevalence in the general population and assess variations in frequency across different sociodemographic characteristics. Setting, participants and outcome measures We searched MEDLINE, EMBASE, Global Health, the Cochrane Library, PsycINFO and PubMed for peer-reviewed literature using validated strategies. Results were limited to studies published in English between January 1990 and January 2017. Studies were eligible if they determined FH prevalence using clinical criteria or DNA-based analyses. We determined a pooled point prevalence of FH in adults and children and assessed the variation of the pooled frequency by age, sex, geographical location, diagnostic method, study quality and year of publication. Estimates were pooled using random-effects meta-analysis. Differences by study-level characteristics were investigated through subgroups, meta-regression and sensitivity analyses. Results The pooled prevalence of FH from 19 studies including 2 458 456 unique individuals was 0.40% (95% CI 0.29% to 0.52%) which corresponds to a frequency of 1 in 250 individuals. FH prevalence was found to vary by age and geographical location but not by any other covariates. Results were consistent in sensitivity analyses. Conclusions Our systematic review suggests that FH is a common disorder, affecting 1 in 250 individuals. These findings underscore the need for early detection and management to decrease CVD risk. PMID:28864697
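
    A minimal sketch of random-effects pooling of prevalences of the kind described: logit-transform each study's prevalence, estimate the between-study variance with the DerSimonian-Laird moment estimator, and back-transform the pooled logit. The study counts below are invented for illustration.

        import numpy as np

        def pooled_prevalence(cases, totals):
            """DerSimonian-Laird random-effects pooling of prevalences on the logit scale."""
            cases, totals = np.asarray(cases, float), np.asarray(totals, float)
            p = cases / totals
            y = np.log(p / (1 - p))                        # logit prevalence per study
            v = 1.0 / cases + 1.0 / (totals - cases)       # approximate variance of the logit
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
            w_re = 1.0 / (v + tau2)
            y_re = np.sum(w_re * y) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            to_p = lambda x: 1.0 / (1.0 + np.exp(-x))
            return to_p(y_re), to_p(y_re - 1.96 * se), to_p(y_re + 1.96 * se)

        # Invented example: five studies of varying size
        est, lo, hi = pooled_prevalence([40, 210, 33, 95, 500],
                                        [12000, 60000, 7000, 30000, 110000])
        print(f"pooled prevalence {est:.4%} (95% CI {lo:.4%} to {hi:.4%})")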

  19. Ultrasonographic fetal head position to predict mode of delivery: a systematic review and bivariate meta-analysis.

    PubMed

    Verhoeven, C J M; Rückert, M E P F; Opmeer, B C; Pajkrt, E; Mol, B W J

    2012-07-01

    We performed a systematic review to determine whether sonographic assessment of occipital position of the fetal head can contribute to the prediction of the mode of delivery. We performed a systematic literature search of electronic databases from inception to May 2011. Two reviewers independently extracted data from the included studies. We used a bivariate model to obtain point estimates and summary curves for sensitivity and specificity for the outcome Cesarean delivery. Eligible studies were cohort studies or cross-sectional studies that reported on both the position of the fetal head, as assessed by ultrasound, before or at the beginning of active labor as well as the outcome of labor in women at term. We included 11 primary articles reporting on 5053 women, of whom 898 had a Cesarean section. All studies indicated disappointing values for sensitivity and specificity in the prediction of Cesarean section. Summary point estimates of sensitivity and specificity were 0.39 (95% CI, 0.32-0.48) and 0.71 (95% CI, 0.67-0.74), respectively. Sonographic assessment of occipital position of the fetal head before delivery should not be used in the prediction of mode of delivery. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.

  20. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
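
    A small sketch of the efficient model-discrimination criteria named above: compare alternative least-squares models of the same observations by corrected AIC and BIC computed from the residual sum of squares under a Gaussian error assumption. The polynomial "alternative models" only stand in for alternative hydraulic-conductivity parameterizations.

        import numpy as np

        def aicc_bic(rss, n, k):
            """Corrected AIC and BIC for a least-squares fit with k parameters
            (including the noise variance), assuming i.i.d. Gaussian errors."""
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = n * np.log(rss / n) + k * np.log(n)
            return aicc, bic

        # Illustrative data: noisy quadratic; alternative models = polynomial degrees 1..4
        rng = np.random.default_rng(8)
        x = np.linspace(0, 1, 40)
        y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.1, x.size)

        for degree in (1, 2, 3, 4):
            coef = np.polyfit(x, y, degree)
            rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
            aicc, bic = aicc_bic(rss, n=x.size, k=degree + 2)   # +1 intercept, +1 variance
            print(f"degree {degree}: AICc = {aicc:7.2f}  BIC = {bic:7.2f}")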

  1. Cost-Effectiveness of Dabigatran Compared to Vitamin-K Antagonists for the Treatment of Deep Venous Thrombosis in the Netherlands Using Real-World Data.

    PubMed

    van Leent, Merlijn W J; Stevanović, Jelena; Jansman, Frank G; Beinema, Maarten J; Brouwers, Jacobus R B J; Postma, Maarten J

    2015-01-01

    Vitamin-K antagonists (VKAs) present an effective anticoagulant treatment in deep venous thrombosis (DVT). However, the use of VKAs is limited because of the risk of bleeding and the necessity of frequent and long-term laboratory monitoring. Therefore, new oral anticoagulant drugs (NOACs) such as dabigatran, with lower rates of (major) intracranial bleeding compared to VKAs and not requiring monitoring, may be considered. To estimate resource utilization and costs of patients treated with the VKAs acenocoumarol and phenprocoumon, for the indication DVT. Furthermore, a formal cost-effectiveness analysis of dabigatran compared to VKAs for DVT treatment was performed, using these estimates. A retrospective observational study design in the thrombotic service of a teaching hospital (Deventer, The Netherlands) was applied to estimate real-world resource utilization and costs of VKA monitoring. A pooled analysis of data from RE-COVER and RE-COVER II on DVT was used to reflect the probabilities for events in the cost-effectiveness model. Dutch costs, utilities and specific data on coagulation monitoring levels were incorporated in the model. In addition to the base-case analysis, univariate and probabilistic sensitivity analyses and scenario analyses were performed. Real-world resource utilization in the thrombotic service of patients treated with VKA for the indication of DVT consisted of 12.3 measurements of the international normalized ratio (INR), with corresponding INR monitoring costs of €138 for a standardized treatment period of 180 days. In the base case, dabigatran treatment compared to VKAs in a cohort of 1,000 DVT patients resulted in savings of €18,900 (95% uncertainty interval (UI) -95,832, 151,162) and 41 (95% UI -18, 97) quality-adjusted life-years (QALYs) gained, calculated from a societal perspective. The probability that dabigatran is cost-effective at a conservative willingness-to-pay threshold of €20,000 per QALY was 99%. Sensitivity and scenario analyses also indicated cost savings or cost-effectiveness below this same threshold. Total INR monitoring costs per patient were estimated at minimally €138. Inserting these real-world data into a cost-effectiveness analysis for patients diagnosed with DVT, dabigatran appeared to be a cost-saving alternative to VKAs in the Netherlands in the base case. Cost savings or favorable cost-effectiveness were robust in sensitivity and scenario analyses. Our results warrant confirmation in other settings and locations.
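
    The reported 99% probability of cost-effectiveness at €20,000 per QALY comes from propagating parameter uncertainty through the decision model. A minimal sketch of that step, with hypothetical distributions for incremental cost and incremental QALYs per 1,000 patients (not the published model inputs), reads the probability off the net monetary benefit.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_sims, wtp = 10_000, 20_000  # willingness-to-pay threshold, euros per QALY

    # Hypothetical distributions for incremental cost and incremental QALYs
    # per 1,000 patients (dabigatran minus VKA); not the published inputs.
    d_cost = rng.normal(loc=-18_900, scale=60_000, size=n_sims)
    d_qaly = rng.normal(loc=41, scale=30, size=n_sims)

    # Net monetary benefit: positive means cost-effective at the threshold.
    nmb = wtp * d_qaly - d_cost
    prob_ce = np.mean(nmb > 0)

    print(f"mean incremental cost: {d_cost.mean():,.0f} EUR")
    print(f"mean incremental QALYs: {d_qaly.mean():.1f}")
    print(f"P(cost-effective at {wtp:,} EUR/QALY): {prob_ce:.2f}")
    ```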

  2. Chapter 8: Demographic characteristics and population modeling

    Treesearch

    Scott H. Stoleson; Mary J. Whitfield; Mark K. Sogge

    2000-01-01

    An understanding of the basic demography of a species is necessary to estimate and evaluate population trends. The relative impact of different demographic parameters on growth rates can be assessed through a sensitivity analysis, in which different parameters are altered singly to assess the effect on population growth. Identification of critical parameters can allow...
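
    As a worked illustration of altering demographic parameters singly, the sketch below builds a small two-stage projection matrix from hypothetical vital rates (fecundity, juvenile and adult survival) and reports how a 10% increase in each rate changes the population growth rate lambda, the dominant eigenvalue of the matrix.

    ```python
    import numpy as np

    def growth_rate(fecundity, juv_survival, adult_survival):
        """Dominant eigenvalue (lambda) of a two-stage projection matrix:
        juveniles mature with probability juv_survival; adults survive with
        probability adult_survival and produce `fecundity` juveniles per year."""
        A = np.array([[0.0, fecundity],
                      [juv_survival, adult_survival]])
        return max(abs(np.linalg.eigvals(A)))

    # Hypothetical vital rates.
    base = {"fecundity": 2.0, "juv_survival": 0.3, "adult_survival": 0.6}
    lam0 = growth_rate(**base)
    print(f"baseline lambda = {lam0:.3f}")

    # One-at-a-time sensitivity: raise each rate by 10% and record the change.
    for name in base:
        perturbed = dict(base)
        perturbed[name] *= 1.10
        lam = growth_rate(**perturbed)
        print(f"+10% {name:14s} -> lambda = {lam:.3f} ({100.0 * (lam - lam0) / lam0:+.1f}%)")
    ```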

  3. Field test and sensitivity analysis of a sensible heat balance method to determine ice contents

    USDA-ARS?s Scientific Manuscript database

    Soil ice content impacts winter vadose zone hydrology. It may be possible to estimate changes in soil ice content with a sensible heat balance (SHB) method, using measurements from heat pulse (HP) sensors. Feasibility of the SHB method is unknown because of difficulties in measuring soil thermal pro...

  4. Compilation and Preliminary Analysis of Sensitivity Data for Pyrotechnics. Phase 1

    DTIC Science & Technology

    1975-05-01

    700-2, except Green smoke (sulfur based) and match head Mix VI, which were tested 6 and 11 times respectively. Optical pyrometer measurements of the... Photographic estimates indicated that an acoustic wave was formed during dust cloud fireball growth.

  5. Sensitivity of measuring the progress in financial risk protection to survey design and its socioeconomic and demographic determinants: A case study in Rwanda.

    PubMed

    Lu, Chunling; Liu, Kai; Li, Lingling; Yang, Yuhong

    2017-04-01

    Reliable and comparable information on households with catastrophic health expenditure (HCHE) is crucial for monitoring and evaluating our progress towards achieving universal financial risk protection. This study aims to investigate the sensitivity of measuring the progress in financial risk protection to survey design and its socioeconomic and demographic determinants. Using the Rwanda Integrated Living Conditions Survey in 2005 and 2010/2011, we derived the level and trend of the percentage of the HCHE using out-of-pocket health spending data derived from (1) a health module with a two-week recall period and six (2005)/seven (2010/2011) survey questions (Method 1) and (2) a consumption module with a four-week/ten-/12-month recall period and 11 (2005)/24 (2010/2011) questions (Method 2). Using multilevel logistic regression analysis, we investigated the household socioeconomic and demographic characteristics that affected the sensitivity of estimating the HCHE to survey design. We found that Method 1 generated a significantly higher HCHE estimate (9.2%, 95% confidence interval 8.4%-10.0%) than Method 2 (7.4%, 6.6%-8.1%) in 2005 and lower estimate (5.6%, 5.2%-6.1%) than Method 2 (8.2%, 7.6%-8.7%) in 2010/2011. The estimated trends of the HCHE using the two methods were not consistent between the two years. A household's size, its income quintile, having no under-five children, and educational level of its head were positively associated with the consistency of its HCHE status when using the two survey methods. Estimates of the progress in financial risk protection, especially among the most vulnerable households, are sensitive to survey design. These results are robust to various thresholds of catastrophic health spending. Future work must focus on mitigating survey effects through the development of statistical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
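
    A household is typically classified as having catastrophic health expenditure when its out-of-pocket health spending exceeds a threshold share of its capacity to pay; the 40% threshold and the synthetic data below are assumptions for illustration only. The sketch shows how the estimated HCHE share can shift when the out-of-pocket figure comes from a short health module versus a longer consumption module, which is the survey-design sensitivity the study measures.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000
    THRESHOLD = 0.40  # assumed catastrophic threshold (share of capacity to pay)

    # Hypothetical annualized household data (local currency units).
    capacity_to_pay = rng.lognormal(mean=10.0, sigma=0.6, size=n)   # e.g., non-food expenditure
    oop_health_module = rng.lognormal(mean=6.5, sigma=1.2, size=n)  # short recall, few items
    oop_consumption_module = oop_health_module * rng.lognormal(0.1, 0.4, size=n)  # longer recall, more items

    def hche_share(oop, capacity, threshold=THRESHOLD):
        """Share of households whose out-of-pocket spending is catastrophic."""
        return np.mean(oop / capacity > threshold)

    print(f"HCHE, health module:      {100.0 * hche_share(oop_health_module, capacity_to_pay):.1f}%")
    print(f"HCHE, consumption module: {100.0 * hche_share(oop_consumption_module, capacity_to_pay):.1f}%")
    ```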

  6. Carbon and water flux responses to physiology by environment interactions: a sensitivity analysis of variation in climate on photosynthetic and stomatal parameters

    NASA Astrophysics Data System (ADS)

    Bauerle, William L.; Daniels, Alex B.; Barnard, David M.

    2014-05-01

    Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m-2 s-1, air temperature 5-40 °C, relative humidity 5-95%, and wind speed at the top of the canopy 1-10 m s-1. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15% in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82% intraspecific and C3 photosynthesis estimate output differences, respectively. Transpiration estimates were affected up to ~46 and 147% by differences in intraspecific and C3 g1 and g0 values—two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since variation in physiology parameter input effects is dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large-scale carbon and water exchange.

  7. Sensitivity of measuring the progress in financial risk protection to survey design and its socioeconomic and demographic determinants: A case study in Rwanda

    PubMed Central

    Lu, Chunling; Liu, Kai; Li, Lingling; Yang, Yuhong

    2017-01-01

    Reliable and comparable information on households with catastrophic health expenditure (HCHE) is crucial for monitoring and evaluating our progress towards achieving universal financial risk protection. This study aims to investigate the sensitivity of measuring the progress in financial risk protection to survey design and its socioeconomic and demographic determinants. Using the Rwanda Integrated Living Conditions Survey in 2005 and 2010/2011, we derived the level and trend of the percentage of the HCHE using out-of-pocket health spending data derived from (1) a health module with a two-week recall period and six (2005)/seven (2010/2011) survey questions (Method 1) and (2) a consumption module with a four-week/ten-/12-month recall period and 11 (2005)/24 (2010/2011) questions (Method 2). Using multilevel logistic regression analysis, we investigated the household socioeconomic and demographic characteristics that affected the sensitivity of estimating the HCHE to survey design. We found that Method 1 generated a significantly higher HCHE estimate (9.2%, 95% confidence interval 8.4%–10.0%) than Method 2 (7.4%, 6.6%–8.1%) in 2005 and lower estimate (5.6%, 5.2%–6.1%) than Method 2 (8.2%, 7.6%–8.7%) in 2010/2011. The estimated trends of the HCHE using the two methods were not consistent between the two years. A household's size, its income quintile, having no under-five children, and educational level of its head were positively associated with the consistency of its HCHE status when using the two survey methods. Estimates of the progress in financial risk protection, especially among the most vulnerable households, are sensitive to survey design. These results are robust to various thresholds of catastrophic health spending. Future work must focus on mitigating survey effects through the development of statistical tools. PMID:28189819

  8. Occupancy Modeling for Improved Accuracy and Understanding of Pathogen Prevalence and Dynamics

    PubMed Central

    Colvin, Michael E.; Peterson, James T.; Kent, Michael L.; Schreck, Carl B.

    2015-01-01

    Most pathogen detection tests are imperfect, with a sensitivity < 100%, thereby resulting in the potential for a false negative, where a pathogen is present but not detected. False negatives in a sample inflate the number of non-detections, negatively biasing estimates of pathogen prevalence. Histological examination of tissues as a diagnostic test can be advantageous because multiple pathogens can be examined and it provides important information on associated pathological changes in the host. However, it is usually less sensitive than molecular or microbiological tests for specific pathogens. Our study objectives were to 1) develop a hierarchical occupancy model to examine pathogen prevalence in spring Chinook salmon Oncorhynchus tshawytscha and their distribution among host tissues, 2) use the model to estimate pathogen-specific test sensitivities and infection rates, and 3) illustrate the effect of using replicate within host sampling on sample sizes required to detect a pathogen. We examined histological sections of replicate tissue samples from spring Chinook salmon O. tshawytscha collected after spawning for common pathogens seen in this population: Apophallus/echinostome metacercariae, Parvicapsula minibicornis, Nanophyetus salmincola/metacercariae, and Renibacterium salmoninarum. A hierarchical occupancy model was developed to estimate pathogen- and tissue-specific test sensitivities and unbiased estimation of host- and organ-level infection rates. Model-estimated sensitivities and host- and organ-level infection rates varied among pathogens, and the model-estimated infection rate was higher than prevalence unadjusted for test sensitivity, confirming that prevalence unadjusted for test sensitivity was negatively biased. The modeling approach provided an analytical framework for using hierarchically structured pathogen detection data from lower sensitivity diagnostic tests, such as histology, to obtain unbiased pathogen prevalence estimates with associated uncertainties. Accounting for test sensitivity using within host replicate samples also required fewer individual fish to be sampled. This approach is useful for evaluating pathogen or microbe community dynamics when test sensitivity is <100%. PMID:25738709
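
    The core bias the occupancy model addresses, prevalence understated when test sensitivity is below 100%, can also be seen with the classical Rogan-Gladen correction. This is a much simpler, non-hierarchical adjustment than the authors' model, and the values below are made up; it is shown only to illustrate the direction and rough size of the correction.

    ```python
    def rogan_gladen(apparent_prevalence, sensitivity, specificity):
        """Prevalence corrected for imperfect test sensitivity and specificity."""
        est = (apparent_prevalence + specificity - 1.0) / (sensitivity + specificity - 1.0)
        return min(max(est, 0.0), 1.0)  # clamp to [0, 1]

    # Hypothetical values: histology detects an infected tissue 60% of the time
    # and essentially never calls an uninfected tissue positive.
    apparent = 0.30
    se, sp = 0.60, 0.99

    print(f"unadjusted prevalence: {apparent:.2f}")
    print(f"adjusted prevalence:   {rogan_gladen(apparent, se, sp):.2f}")
    ```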

  9. Occupancy modeling for improved accuracy and understanding of pathogen prevalence and dynamics

    USGS Publications Warehouse

    Colvin, Michael E.; Peterson, James T.; Kent, Michael L.; Schreck, Carl B.

    2015-01-01

    Most pathogen detection tests are imperfect, with a sensitivity < 100%, thereby resulting in the potential for a false negative, where a pathogen is present but not detected. False negatives in a sample inflate the number of non-detections, negatively biasing estimates of pathogen prevalence. Histological examination of tissues as a diagnostic test can be advantageous because multiple pathogens can be examined and it provides important information on associated pathological changes in the host. However, it is usually less sensitive than molecular or microbiological tests for specific pathogens. Our study objectives were to 1) develop a hierarchical occupancy model to examine pathogen prevalence in spring Chinook salmon Oncorhynchus tshawytscha and their distribution among host tissues, 2) use the model to estimate pathogen-specific test sensitivities and infection rates, and 3) illustrate the effect of using replicate within host sampling on sample sizes required to detect a pathogen. We examined histological sections of replicate tissue samples from spring Chinook salmon O. tshawytscha collected after spawning for common pathogens seen in this population: Apophallus/echinostome metacercariae, Parvicapsula minibicornis, Nanophyetus salmincola/metacercariae, and Renibacterium salmoninarum. A hierarchical occupancy model was developed to estimate pathogen- and tissue-specific test sensitivities and unbiased estimation of host- and organ-level infection rates. Model-estimated sensitivities and host- and organ-level infection rates varied among pathogens, and the model-estimated infection rate was higher than prevalence unadjusted for test sensitivity, confirming that prevalence unadjusted for test sensitivity was negatively biased. The modeling approach provided an analytical framework for using hierarchically structured pathogen detection data from lower sensitivity diagnostic tests, such as histology, to obtain unbiased pathogen prevalence estimates with associated uncertainties. Accounting for test sensitivity using within host replicate samples also required fewer individual fish to be sampled. This approach is useful for evaluating pathogen or microbe community dynamics when test sensitivity is <100%.

  10. Utilization of Satellite Data in Land Surface Hydrology: Sensitivity and Assimilation

    NASA Technical Reports Server (NTRS)

    Lakshmi, Venkataraman; Susskind, Joel

    1999-01-01

    This paper investigates the sensitivity of potential evapotranspiration to input meteorological variables, viz. surface air temperature and surface vapor pressure. The sensitivity studies have been carried out for a wide range of land surface variables such as wind speed, leaf area index and surface temperatures. Errors in the surface air temperature and surface vapor pressure result in errors of different signs in the computed potential evapotranspiration. This result has implications for use of estimated values from satellite data or analysis of surface air temperature and surface vapor pressure in large scale hydrological modeling. The comparison of cumulative potential evapotranspiration estimates using ground observations and satellite observations over Manhattan, Kansas for a period of several months shows very little difference between the two. The cumulative differences between the ground based and satellite based estimates of potential evapotranspiration amounted to less than 20 mm over an 18-month period, a percentage difference of 15%. The use of satellite estimates of surface skin temperature in hydrological modeling to update the soil moisture using a physical adjustment concept is studied in detail, including the extent of changes in soil moisture resulting from the assimilation of surface skin temperature. The soil moisture of the surface layer is adjusted by 0.9 mm over a 10-day period as a result of a 3 K difference between the predicted and the observed surface temperature. This is a considerable amount given the fact that the top layer can hold only 5 mm of water.
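
    The sensitivity of potential evapotranspiration to air temperature and vapor pressure can be probed with finite differences around a reference state. The formulation below is a deliberately simplified aerodynamic-style stand-in (Tetens saturation vapor pressure with an arbitrary wind-function coefficient), not the scheme used in the paper; it only demonstrates the mechanics and the fact that temperature and vapor-pressure errors push the estimate in opposite directions.

    ```python
    import numpy as np

    def sat_vapor_pressure_kpa(t_celsius):
        """Tetens approximation for saturation vapor pressure (kPa)."""
        return 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))

    def pet_toy(t_air, e_air, wind=2.0, c=0.26):
        """Toy aerodynamic PET (mm/day), proportional to vapor pressure deficit.
        Placeholder formulation with an arbitrary coefficient, for illustration only."""
        return c * (1.0 + 0.54 * wind) * (sat_vapor_pressure_kpa(t_air) - e_air)

    t0, e0 = 25.0, 1.5  # reference air temperature (deg C) and vapor pressure (kPa)
    base = pet_toy(t0, e0)

    # Central finite-difference sensitivities to the two meteorological inputs.
    dpet_dt = (pet_toy(t0 + 0.5, e0) - pet_toy(t0 - 0.5, e0)) / 1.0
    dpet_de = (pet_toy(t0, e0 + 0.05) - pet_toy(t0, e0 - 0.05)) / 0.1

    print(f"PET = {base:.2f} mm/day")
    print(f"dPET/dT ~ {dpet_dt:+.3f} mm/day per deg C")
    print(f"dPET/de ~ {dpet_de:+.3f} mm/day per kPa")
    ```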

  11. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  12. A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.

    PubMed

    Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C

    2018-05-03

    The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
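
    For reference, the classical (non-robust) HCp estimate fits a lognormal SSD to species toxicity endpoints and inverts it at the p-th percentile. The sketch below uses made-up EC50 values and adds a crude median/MAD plug-in as a robustness check; the estimators studied in the paper are more sophisticated than either.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical species endpoints (e.g., EC50s in mg/L) for one toxicant.
    ec50 = np.array([0.8, 1.3, 2.1, 3.4, 5.0, 7.6, 12.0, 18.5, 30.0, 55.0])
    log_x = np.log(ec50)

    p = 0.05  # protect 95% of species

    # Classical fit: log-endpoints assumed normal, mean/sd plug-in.
    mu, sigma = log_x.mean(), log_x.std(ddof=1)
    hc5 = np.exp(stats.norm.ppf(p, loc=mu, scale=sigma))
    print(f"HC5 ~ {hc5:.2f} mg/L (lognormal SSD, mean/sd plug-in)")

    # Crude robust variant: median and MAD-based scale instead of mean/sd.
    mu_r = np.median(log_x)
    sigma_r = stats.median_abs_deviation(log_x, scale="normal")
    hc5_r = np.exp(stats.norm.ppf(p, loc=mu_r, scale=sigma_r))
    print(f"HC5 ~ {hc5_r:.2f} mg/L (median/MAD plug-in)")
    ```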

  13. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%. Thus, a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
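
    Duty cycle is simply the percentage of cycle time spent exerting, so it can be computed directly from per-frame exertion labels. The sketch below compares a manual frame-by-frame label stream with a simulated automated one (frame rate, disagreement rate and exertion fraction are all hypothetical) and reports the resulting DC difference.

    ```python
    import numpy as np

    FPS = 30  # assumed video frame rate

    def duty_cycle(exertion_flags):
        """Percent of frames labeled as exertion."""
        return 100.0 * np.asarray(exertion_flags, dtype=bool).mean()

    def exertion_time_s(exertion_flags, fps=FPS):
        return np.count_nonzero(exertion_flags) / fps

    # Hypothetical per-frame labels: manual frame-by-frame vs. an automated algorithm.
    rng = np.random.default_rng(1)
    manual = rng.random(9_000) < 0.45                 # 5 minutes of video, ~45% exertion
    automated = manual ^ (rng.random(9_000) < 0.03)   # ~3% of frames relabeled

    dc_manual, dc_auto = duty_cycle(manual), duty_cycle(automated)
    print(f"manual DC:    {dc_manual:.1f}% ({exertion_time_s(manual):.0f} s exertion)")
    print(f"automated DC: {dc_auto:.1f}%")
    print(f"DC difference: {dc_auto - dc_manual:+.1f} percentage points")
    ```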

  14. Cost-Effectiveness Analysis of the Introduction of HPV Vaccination of 9-Year-Old-Girls in Iran.

    PubMed

    Yaghoubi, Mohsen; Nojomi, Marzieh; Vaezi, Atefeh; Erfani, Vida; Mahmoudi, Susan; Ezoji, Khadijeh; Zahraei, Seyed Mohsen; Chaudhri, Irtaza; Moradi-Lakeh, Maziar

    2018-04-23

    To estimate the cost-effectiveness of introducing the quadrivalent human papillomavirus (HPV) vaccine into the national immunization program of Iran. The CERVIVAC cost-effectiveness model was used to calculate incremental cost per averted disability-adjusted life-year by vaccination compared with no vaccination from both governmental and societal perspectives. Calculations were based on epidemiologic parameters from the Iran National Cancer Registry and other national data sources as well as from literature review. We estimated all direct and indirect costs of cervical cancer treatment and vaccination program. All future costs and benefits were discounted at 3% per year and deterministic sensitivity analysis was used. During a 10-year period, HPV vaccination was estimated to avert 182 cervical cancer cases and 20 deaths at a total vaccination cost of US $23,459,897; total health service cost prevented because of HPV vaccination was estimated to be US $378,646 and US $691,741 from the governmental and societal perspective, respectively. Incremental cost per disability-adjusted life-year averted within 10 years was estimated to be US $15,205 and US $14,999 from the governmental and societal perspective, respectively, and both are higher than 3 times the gross domestic product per capita of Iran (US $14,289). Sensitivity analysis showed that variation in vaccine price and in the number of doses had the greatest impact on the incremental cost-effectiveness ratio. Using a two-dose vaccination program could be cost-effective from the societal perspective (incremental cost-effectiveness ratio = US $11,849). Introducing a three-dose HPV vaccination program is currently not cost-effective in Iran. Because vaccine supply cost is the most important parameter in this evaluation, considering a two-dose schedule or reducing vaccine prices would affect the final conclusions. Copyright © 2018. Published by Elsevier Inc.
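
    The ICER reported here is the incremental net cost of the vaccination program divided by the DALYs it averts, judged against a threshold of three times GDP per capita. The sketch below uses placeholder inputs (only the threshold is taken from the abstract) to show the arithmetic and why a cheaper two-dose schedule can cross the threshold.

    ```python
    def icer(program_cost, treatment_cost_averted, dalys_averted):
        """Incremental cost-effectiveness ratio: net cost per DALY averted."""
        return (program_cost - treatment_cost_averted) / dalys_averted

    GDP_PER_CAPITA = 4_763           # implied by the 3x GDP threshold quoted above
    THRESHOLD = 3 * GDP_PER_CAPITA   # US $14,289

    # Hypothetical inputs: (program cost, treatment cost averted, DALYs averted).
    scenarios = {
        "three-dose schedule": (23_000_000, 690_000, 1_480),
        "two-dose schedule":   (16_000_000, 690_000, 1_480),
    }

    for name, (cost, averted, dalys) in scenarios.items():
        value = icer(cost, averted, dalys)
        verdict = "cost-effective" if value <= THRESHOLD else "not cost-effective"
        print(f"{name}: ICER = ${value:,.0f}/DALY averted ({verdict} at ${THRESHOLD:,})")
    ```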

  15. The cost effectiveness of a quality improvement program to reduce maternal and fetal mortality in a regional referral hospital in Accra, Ghana.

    PubMed

    Goodman, David M; Ramaswamy, Rohit; Jeuland, Marc; Srofenyoh, Emmanuel K; Engmann, Cyril M; Olufolabi, Adeyemi J; Owen, Medge D

    2017-01-01

    To evaluate the cost-effectiveness of a quality improvement intervention aimed at reducing maternal and fetal mortality in Accra, Ghana. Quasi-experimental, time-sequence intervention, retrospective cost-effectiveness analysis. Data were collected on the cost and outcomes of a 5-year Kybele-Ghana Health Service Quality Improvement (QI) intervention conducted at Ridge Regional Hospital, a tertiary referral center in Accra, Ghana, focused on systems, personnel, and communication. Maternal deaths prevented were estimated by comparing observed rates with counterfactual projections of maternal mortality and case-fatality rates for hypertensive disorders of pregnancy and obstetric hemorrhage. Stillbirths prevented were estimated based on counterfactual estimates of stillbirth rates. Cost-effectiveness was then calculated using estimated disability-adjusted life years averted and subjected to Monte Carlo and one-way sensitivity analyses to test the importance of assumptions inherent in the calculations. The main outcome measure was the incremental cost-effectiveness ratio (ICER), which represents the cost per disability-adjusted life-year (DALY) averted by the intervention compared to a model counterfactual. From 2007-2011, 39,234 deliveries were affected by the QI intervention implemented at Ridge Regional Hospital. The total budget for the program was $2,363,100. Based on program estimates, 236 (±5) maternal deaths and 129 (±13) intrapartum stillbirths were averted (14,876 DALYs), implying an ICER of $158 ($129-$195) USD. This value is well below the highly cost-effective threshold of $1268 USD. Sensitivity analysis considered DALY calculation methods, yearly prevalence of risk factors, and case-fatality rates. In each of these analyses, the program remained highly cost-effective, with an ICER ranging from $97 to $218. QI interventions to reduce maternal and fetal mortality in low-resource settings can be highly cost-effective. Cost-effectiveness analysis is feasible and should regularly be conducted to encourage fiscal responsibility in the pursuit of improved maternal and child health.

  16. The population benefit of radiotherapy for gynaecological cancer: Local control and survival estimates.

    PubMed

    Hanna, Timothy P; Delaney, Geoffrey P; Barton, Michael B

    2016-09-01

    The population benefit of radiotherapy for gynaecological cancer (GC) if evidence-based guidelines were routinely followed is not known. This study's aim was to address this. Decision trees were utilised to estimate benefit. Radiotherapy alone (RT) benefit was the absolute proportional benefit of radiotherapy over no radiotherapy for radical indications, and over surgery alone for adjuvant indications. Chemoradiotherapy (CRT) benefit was the absolute incremental benefit of concurrent chemotherapy and RT over RT alone. Citation databases were systematically queried for the highest level of evidence defining 5-year Local Control (LC), and 2-year and 5-year Overall Survival (OS) benefit. Meta-analysis was performed if there were multiple sources of the same evidence level. Deterministic and probabilistic sensitivity analysis was performed. Guidelines supported 22 radiotherapy indications, of which 8 were for CRT. 21% of all GC had an adjuvant or curative radiotherapy indication. The absolute estimated population-based 5-year LC and OS benefits of RT, if all patients were treated according to guidelines, were: endometrial cancer LC 5.7% (95% CI 3.5%, 8.2%), OS 2.3% (1.2%, 3.4%); ovarian cancer nil; vulval cancer LC 10.0% (1.6%, 18.2%), OS 8.5% (0.5%, 15.9%). Combined with prior estimates for cervical cancer, RT benefits for all GC were LC 9.0% (7.8%, 10.3%), OS 4.6% (3.8%, 5.4%). The incremental benefit of CRT for all GC was LC 0.7% (0.4%, 0.9%), OS 0.5% (0.2%, 0.8%). Benefits were distinct from the contribution of other modalities. The model was robust in sensitivity analysis. Most radiotherapy benefit was irreplaceable by other modalities. Radiotherapy provides important and irreplaceable LC and OS benefits for GC when optimally utilised. The population model provided a robust means for estimating this benefit. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Using Observational Data to Estimate the Effect of Hand Washing and Clean Delivery Kit Use by Birth Attendants on Maternal Deaths after Home Deliveries in Rural Bangladesh, India and Nepal

    PubMed Central

    Seward, Nadine; Prost, Audrey; Copas, Andrew; Corbin, Marine; Li, Leah; Colbourn, Tim; Osrin, David; Neuman, Melissa; Azad, Kishwar; Kuddus, Abdul; Nair, Nirmala; Tripathy, Prasanta; Manandhar, Dharma; Costello, Anthony; Cortina-Borja, Mario

    2015-01-01

    Background Globally, puerperal sepsis accounts for an estimated 8–12% of maternal deaths, but evidence is lacking on the extent to which clean delivery practices could improve maternal survival. We used data from the control arms of four cluster-randomised controlled trials conducted in rural India, Bangladesh and Nepal, to examine associations between clean delivery kit use and hand washing by the birth attendant with maternal mortality among home deliveries. Methods We tested associations between clean delivery practices and maternal deaths, using a pooled dataset for 40,602 home births across sites in the three countries. Cross-sectional data were analysed by fitting logistic regression models with and without multiple imputation, and confounders were selected a priori using causal directed acyclic graphs. The robustness of estimates was investigated through sensitivity analyses. Results Hand washing was associated with a 49% reduction in the odds of maternal mortality after adjusting for confounding factors (adjusted odds ratio (AOR) 0.51, 95% CI 0.28–0.93). The sensitivity analysis testing the missing at random assumption for the multiple imputation, as well as the sensitivity analysis accounting for possible misclassification bias in the use of clean delivery practices, indicated that the association between hand washing and maternal death had been overestimated. Clean delivery kit use was not associated with maternal death (AOR 1.26, 95% CI 0.62–2.56). Conclusions Our evidence suggests that hand washing in delivery is critical for maternal survival among home deliveries in rural South Asia, although the exact magnitude of this effect is uncertain due to inherent biases associated with observational data from low resource settings. Our finding that kit use did not improve maternal survival suggests that soap is not being used in all instances where kit use is reported. PMID:26295838

  18. Initial evaluation of rectal bleeding in young persons: a cost-effectiveness analysis.

    PubMed

    Lewis, James D; Brown, Alphonso; Localio, A Russell; Schwartz, J Sanford

    2002-01-15

    Evaluation of rectal bleeding in young patients is a frequent diagnostic challenge. To determine the relative cost-effectiveness of alternative diagnostic strategies for young patients with rectal bleeding. Cost-effectiveness analysis using a Markov model. Probability estimates were based on published medical literature. Cost estimates were based on Medicare reimbursement rates and published medical literature. Persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding. The patient's lifetime. Modified societal perspective. Diagnostic strategies included no evaluation, colonoscopy, flexible sigmoidoscopy, barium enema, anoscopy, or any feasible combination of these procedures. Life expectancy and costs. For 35-year-old patients, the no-evaluation strategy yielded the least life expectancy. The incremental cost-effectiveness of flexible sigmoidoscopy compared with no evaluation or with any strategy incorporating anoscopy (followed by further evaluation if no anal disease was found on anoscopy) was less than $5,300 per year of life gained. A strategy of flexible sigmoidoscopy plus barium enema yielded the greatest life expectancy, with an incremental cost of $23,918 per additional life-year gained compared with flexible sigmoidoscopy alone. As patient age at presentation of rectal bleeding increased, evaluation of the entire colon became more cost-effective. The incremental cost-effectiveness of flexible sigmoidoscopy plus barium enema compared with colonoscopy was sensitive to estimates of the sensitivity of the tests. In a probabilistic sensitivity analysis comparing flexible sigmoidoscopy with anoscopy followed by flexible sigmoidoscopy if needed, the middle 95% of the distribution of incremental cost-effectiveness ratios ranged from flexible sigmoidoscopy yielding an increased life expectancy at reduced cost to $52,158 per year of life gained (mean, $11,461 per year of life saved). Evaluation of the colon of persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding increases life expectancy at a cost comparable to that of colon cancer screening.

  19. Laboratory Workflow Analysis of Culture of Periprosthetic Tissues in Blood Culture Bottles.

    PubMed

    Peel, Trisha N; Sedarski, John A; Dylla, Brenda L; Shannon, Samantha K; Amirahmadi, Fazlollaah; Hughes, John G; Cheng, Allen C; Patel, Robin

    2017-09-01

    Culture of periprosthetic tissue specimens in blood culture bottles is more sensitive than conventional techniques, but the impact on laboratory workflow has yet to be addressed. Herein, we examined the impact of culture of periprosthetic tissues in blood culture bottles on laboratory workflow and cost. The workflow was process mapped, decision tree models were constructed using probabilities of positive and negative cultures drawn from our published study (T. N. Peel, B. L. Dylla, J. G. Hughes, D. T. Lynch, K. E. Greenwood-Quaintance, A. C. Cheng, J. N. Mandrekar, and R. Patel, mBio 7:e01776-15, 2016, https://doi.org/10.1128/mBio.01776-15), and the processing times and resource costs from the laboratory staff time viewpoint were used to compare periprosthetic tissue culture processes using conventional techniques with culture in blood culture bottles. Sensitivity analysis was performed using various rates of positive cultures. Annualized labor savings were estimated based on salary costs from the U.S. Labor Bureau for laboratory staff. The model demonstrated a 60.1% reduction in mean total staff time with the adoption of tissue inoculation into blood culture bottles compared to conventional techniques (mean ± standard deviation, 30.7 ± 27.6 versus 77.0 ± 35.3 h per month, respectively; P < 0.001). The estimated annualized labor cost savings of culture using blood culture bottles was $10,876.83 (±$337.16). Sensitivity analysis was performed using various rates of culture positivity (5 to 50%). Culture in blood culture bottles was cost-effective, based on the estimated labor cost savings of $2,132.71 for each percent increase in test accuracy. In conclusion, culture of periprosthetic tissue in blood culture bottles is not only more accurate but also cost-saving compared to conventional culture methods. Copyright © 2017 American Society for Microbiology.
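
    The decision-tree comparison amounts to weighting hands-on processing time per specimen by the probability of a positive versus a negative culture under each workflow. The per-specimen times, positivity rate, monthly volume, and wage below are hypothetical; they only illustrate how mean staff time and annualized labor savings fall out of the tree.

    ```python
    def expected_minutes(p_positive, minutes_if_positive, minutes_if_negative):
        """Expected hands-on time per specimen for one workflow branch."""
        return p_positive * minutes_if_positive + (1 - p_positive) * minutes_if_negative

    # Hypothetical per-specimen hands-on times (minutes) and positivity rate.
    P_POS = 0.20
    conventional = expected_minutes(P_POS, minutes_if_positive=45, minutes_if_negative=25)
    blood_culture_bottles = expected_minutes(P_POS, minutes_if_positive=25, minutes_if_negative=8)

    SPECIMENS_PER_MONTH = 150
    HOURLY_WAGE = 30.0  # USD, assumed technologist labor cost

    def hours_per_month(minutes_per_specimen):
        return minutes_per_specimen * SPECIMENS_PER_MONTH / 60.0

    saved_hours = hours_per_month(conventional) - hours_per_month(blood_culture_bottles)
    print(f"conventional:          {hours_per_month(conventional):.1f} staff-hours/month")
    print(f"blood culture bottles: {hours_per_month(blood_culture_bottles):.1f} staff-hours/month")
    print(f"estimated annual labor savings: ${saved_hours * 12 * HOURLY_WAGE:,.0f}")
    ```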

  20. Diagnostic accuracy of nucleic acid amplification based assays for tuberculous meningitis: A meta-analysis.

    PubMed

    Gupta, Renu; Talwar, Puneet; Talwar, Pumanshi; Khurana, Sarbjeet; Kushwaha, Suman; Jalan, Nupur; Thakur, Rajeev

    2018-05-25

    Numerous in-house and commercial nucleic acid amplification tests (NAAT) have been evaluated using variable reference standards for diagnosis of TBM, but their diagnostic potential is still not very clear. We conducted a meta-analysis to assess the diagnostic accuracy of different NAAT-based assays for diagnosing TBM against 43 data sets of confirmed TBM (n = 1066) and 61 data sets of suspected TBM (n = 3721) as two reference standards. Summary estimates of sensitivity and specificity were obtained using the bivariate model. The QUADAS-2 tool was used to assess bias and applicability. Publication bias was assessed with Deeks' funnel plot. Studies with confirmed TBM had better summary estimates as compared to studies with clinically suspected TBM, irrespective of the NAAT and index tests used. Among in-house assays, MPB as the gene target had the best summary estimates in both the confirmed [sensitivity: 90% (83-95), specificity: 97% (87-99), DOR: 247 (50-1221), AUC: 99% (97-100), PLR: 38.8 (6.6-133), NLR: 0.11 (0.05-0.18), I² = 15%] and clinically suspected [sensitivity: 69% (47-85), specificity: 96% (90-98), DOR: 62 (16.8-232), AUC: 94% (92-97), PLR: 16.9 (6.5-36.8), NLR: 0.33 (0.16-0.56), I² = 15.3%] groups. GeneXpert revealed good diagnostic accuracy only in the confirmed TBM group [sensitivity = 57% (38-74), specificity = 98% (89-100), DOR = 62 (7-589), AUC = 87% (79-96), PLR = 33.2 (3.8-128), NLR = 0.45 (0.26-0.68), I² = 0%]. This meta-analysis identified a potential role for the MPB gene target among in-house assays and for GeneXpert as a commercial assay for diagnosing TBM. Copyright © 2018. Published by Elsevier Ltd.
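
    The summary statistics quoted in the brackets are simple functions of sensitivity and specificity. The sketch below recomputes the positive and negative likelihood ratios and the diagnostic odds ratio from a sensitivity/specificity pair close to the GeneXpert summary; pooled meta-analytic values will not match these plug-in numbers exactly.

    ```python
    def likelihood_ratios(sensitivity, specificity):
        plr = sensitivity / (1 - specificity)   # positive likelihood ratio
        nlr = (1 - sensitivity) / specificity   # negative likelihood ratio
        dor = plr / nlr                         # diagnostic odds ratio
        return plr, nlr, dor

    # Illustrative values approximating the GeneXpert summary estimates above.
    se, sp = 0.57, 0.98
    plr, nlr, dor = likelihood_ratios(se, sp)
    print(f"PLR = {plr:.1f}, NLR = {nlr:.2f}, DOR = {dor:.0f}")
    ```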
