Science.gov

Sample records for global sensitivity analysis

  1. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can easily be adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
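The variance-based indices mentioned above can be illustrated with the standard pick-and-freeze estimators on a toy model; the model, parameter ranges and seed below are our own illustration, not the LaSAR setup from the record.

```python
# Sketch: Sobol' first-order (Saltelli-style) and total-effect (Jansen-style)
# estimators on a toy three-parameter model with uniform inputs on [0, 1].
import numpy as np

def model(x):
    # toy nonlinear function of three uncertain inputs (illustrative only)
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * np.sin(2.0 * np.pi * x[:, 2])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, d))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))

S1, ST = np.empty(d), np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # resample only column i
    fABi = model(ABi)
    S1[i] = np.mean(fB * (fABi - fA)) / V        # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V  # total-effect index
```

Because the toy model is additive, first-order and total-effect indices nearly coincide and sum to about one; for interacting models ST exceeds S1.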

  2. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
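The sampling strategies compared above differ in how they fill the input space. A minimal Latin hypercube sampler is easy to sketch; the Weibull parameter ranges below are illustrative placeholders, not the Masdar City values.

```python
# Sketch: Latin hypercube sampling (one point per equal-probability stratum
# in every dimension) applied to two hypothetical Weibull wind parameters.
import numpy as np
from math import gamma

def lhs(n, d, rng):
    # stratify each column into n cells, then shuffle columns independently
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(1)
n = 2000
u = lhs(n, 2, rng)
# map uniforms to illustrative uncertain inputs: Weibull shape k, scale c (m/s)
k = 1.8 + 0.4 * u[:, 0]
c = 7.0 + 2.0 * u[:, 1]
# mean wind speed of a Weibull(k, c) distribution: c * Gamma(1 + 1/k)
mean_speed = c * np.array([gamma(1.0 + 1.0 / ki) for ki in k])
```

Unlike pseudo-random sampling, every marginal stratum is hit exactly once, which typically reduces estimator variance for the same sample size.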

  3. Multitarget global sensitivity analysis of n-butanol combustion.

    PubMed

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-01

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis. PMID:23530815

  4. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
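The local/global contrast reviewed above can be made concrete with a toy model in which a local derivative misses an input that matters globally; the example is our own, not the paper's simulated case.

```python
# Sketch: local (derivative at a nominal point) vs global (variance-based)
# sensitivity for f(x1, x2) = x1 + x2**3 with x1, x2 ~ U(-1, 1).
import numpy as np

def f(x1, x2):
    return x1 + x2 ** 3

# local view: centred finite differences at the nominal point (0, 0)
h = 1e-6
d1 = (f(h, 0.0) - f(-h, 0.0)) / (2.0 * h)   # derivative w.r.t. x1
d2 = (f(0.0, h) - f(0.0, -h)) / (2.0 * h)   # ~0: x2 looks unimportant locally

# global view: variance contributions over the whole domain, analytic here:
# Var(x1) = 1/3 and Var(x2**3) = E[x2**6] = 1/7 (since E[x2**3] = 0)
V1, V2 = 1.0 / 3.0, 1.0 / 7.0
S1, S2 = V1 / (V1 + V2), V2 / (V1 + V2)     # x2 carries 30% of the variance
```

The local analysis reports x2 as irrelevant while the global one assigns it 30% of the output variance, which is the motivation for comparing both approaches.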

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
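The bootstrap idea behind the convergence criteria can be sketched for the simplest case, ranking convergence under resampling; the toy model and the correlation-based index are our own simplifications, not the paper's estimators.

```python
# Sketch: bootstrap the ranking of a cheap sensitivity index at two sample
# sizes and record how often the expected ranking is reproduced.
import numpy as np

def model(x):
    return 3.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

def indices(x, y):
    # squared Pearson correlation as a cheap stand-in sensitivity index
    return np.array([np.corrcoef(x[:, i], y)[0, 1] ** 2 for i in range(x.shape[1])])

rng = np.random.default_rng(2)

def ranking_stability(n, reps=200):
    x = rng.random((n, 3))
    y = model(x)
    hits = 0
    for _ in range(reps):
        b = rng.integers(0, n, n)   # bootstrap resample with replacement
        hits += np.array_equal(np.argsort(-indices(x[b], y[b])), [0, 1, 2])
    return hits / reps

frac_small, frac_large = ranking_stability(20), ranking_stability(2000)
```

With a small sample the bootstrap replicates disagree about the ranking; with a large sample the ranking is stable, which is exactly the convergence signal the paper formalises.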

  6. Towards More Efficient and Effective Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2014-05-01

    Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches to sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting, conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach to global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.

  7. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
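The SampEn statistic tuned above is compact enough to sketch directly; the implementation below is our own minimal version, not the DASim code, shown separating a regular series from an irregular one.

```python
# Sketch: sample entropy SampEn(m, r) = -ln(A/B), where B and A count
# template pairs of length m and m+1 within Chebyshev distance r.
import numpy as np

def sampen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        n = len(t)
        return (np.count_nonzero(d <= r) - n) / 2.0  # unordered pairs, no self-match
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 20.0 * np.pi, 400)
regular = np.sin(t)            # highly regular "schedule"
irregular = rng.random(400)    # white noise
```

Lower SampEn means more regularity, so the sine series scores well below the noise series; adjusting an activity's parameters to hit a target SampEn is the tuning problem the paper addresses.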

  8. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  9. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGESBeta

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  10. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
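Step (2), the parameter screening study, is commonly done with elementary-effects (Morris-style) designs. Below is a hedged one-at-a-time sketch of that idea on a toy model; it is our own illustration, not the PSUADE implementation.

```python
# Sketch: elementary effects with a simple one-at-a-time design; mu* (mean
# absolute elementary effect) is used to screen out insensitive inputs.
import numpy as np

def model(x):
    return 5.0 * x[0] + x[1] ** 2 + 0.01 * x[2]   # toy model, x2 near-inert

rng = np.random.default_rng(4)
d, n_base, delta = 3, 50, 0.25
effects = np.empty((d, n_base))
for t in range(n_base):
    x = rng.random(d) * (1.0 - delta)   # keep x + delta inside [0, 1]
    base = model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta                  # perturb one input at a time
        effects[i, t] = (model(xp) - base) / delta

mu_star = np.mean(np.abs(effects), axis=1)   # Morris-style mu* statistic
```

Inputs with small mu* (here x2) would be frozen before the more expensive quantitative analysis of step (3).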

  11. A comparison of two sampling methods for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tarantola, Stefano; Becker, William; Zeitz, Dirk

    2012-05-01

    We compare the convergence properties of two different quasi-random sampling designs, Sobol' quasi-Monte Carlo and Latin supercube sampling, in variance-based global sensitivity analysis. We use the non-monotonic V-function of Sobol' as the base case study, and compare the performance of both sampling strategies at increasing sample size and dimensionality against analytical values. The results indicate that in almost all cases investigated here, the Sobol' design performs better. This, coupled with the fact that effective Latin supercube sampling requires a priori knowledge of the interaction properties of the function, leads us to recommend Sobol' sampling in most practical cases.
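Benchmarks of this kind rely on test functions whose indices are known analytically. As an illustration we use Sobol's g-function (an assumed stand-in here; the record itself uses Sobol's V-function), whose partial variances have closed forms against which any sampling design can be checked.

```python
# Sketch: Sobol's g-function with known analytic variances, plus a plain
# pseudo-random Monte Carlo check of the total variance.
import numpy as np

a = np.array([0.0, 1.0, 4.5, 9.0])   # importance falls as a_i grows

def g(x):
    return np.prod((np.abs(4.0 * x - 2.0) + a) / (1.0 + a), axis=1)

# analytic partial and total variances of the g-function
Vi = (1.0 / 3.0) / (1.0 + a) ** 2
V_exact = np.prod(1.0 + Vi) - 1.0
S_exact = Vi / V_exact               # exact first-order indices

rng = np.random.default_rng(5)
V_mc = np.var(g(rng.random((200_000, 4))))
```

Repeating the variance (or index) estimate at increasing sample sizes for each design, and measuring the error against `V_exact` and `S_exact`, is the convergence comparison the record describes.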

  12. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, global-scale assessments of data sensitivity to model parameters are not available. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
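The sensitivity index defined above, relative change of VWC over relative change of a parameter, can be sketched directly. The VWC formula and reference values below are illustrative stand-ins, not the paper's soil-water balance.

```python
# Sketch: one-at-a-time relative sensitivity index
#   SI = (dVWC / VWC) / (dp / p)
# for a toy VWC = seasonal evapotranspiration / yield.
def vwc(et_season, yield_t):
    return et_season / yield_t          # m3 of water per tonne, schematically

ref = {"et_season": 450.0, "yield_t": 3.0}   # hypothetical reference values

def sensitivity_index(param, rel_change=0.01):
    p = dict(ref)
    base = vwc(**p)
    p[param] *= (1.0 + rel_change)
    return (vwc(**p) - base) / base / rel_change
```

Evapotranspiration enters the ratio linearly (SI = +1, a direct sensitivity), while yield enters inversely (SI close to -1, an inverse sensitivity), matching the sign convention in the abstract.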

  13. Global sensitivity analysis for DSMC simulations of hypersonic shocks

    NASA Astrophysics Data System (ADS)

    Strand, James S.; Goldstein, David B.

    2013-08-01

    Two global, Monte Carlo based sensitivity analyses were performed to determine which reaction rates most affect the results of Direct Simulation Monte Carlo (DSMC) simulations for a hypersonic shock in five-species air. The DSMC code was written and optimized with shock tube simulations in mind, and includes modifications to allow for the efficient simulation of a 1D hypersonic shock. The TCE model is used to convert Arrhenius-form reaction rate constants into reaction cross-sections, after modification to allow accurate modeling of reactions with arbitrarily large rates relative to the VHS collision rate. The square of the Pearson correlation coefficient was used as the measure for sensitivity in the first of the analyses, and the mutual information was used as the measure in the second. The quantity of interest (QoI) for these analyses was the NO density profile across a 1D shock at ˜8000 m/s (M∞ ≈ 23). This vector QoI was broken into a set of scalar QoIs, each representing the density of NO at a specific point downstream of the shock, and sensitivities were calculated for each scalar QoI based on both measures of sensitivity. Profiles of sensitivity vs. location downstream of the shock were then integrated to determine an overall sensitivity for each reaction. A weighting function was used in the integration in order to emphasize sensitivities in the region of greatest thermal and chemical non-equilibrium. Both sensitivity analysis methods agree on the six reactions which most strongly affect the density of NO. These six reactions are the N2 dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions N2 + O ⇄ NO + N and NO + O ⇄ O2 + N. This analysis lays the groundwork for the application of Bayesian statistical methods for the calibration of parameters relevant to modeling a hypersonic shock layer with the DSMC method.
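The first sensitivity measure used above, the squared Pearson correlation between a perturbed input and a scalar QoI, can be sketched on a toy response; the "rate constants" below are hypothetical, not the DSMC reaction set.

```python
# Sketch: Monte Carlo samples of two perturbed inputs, with the squared
# Pearson correlation against a toy scalar QoI as the sensitivity measure.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
k1 = rng.uniform(0.5, 1.5, n)      # hypothetical dominant rate constant
k2 = rng.uniform(0.5, 1.5, n)      # hypothetical weak rate constant
qoi = np.exp(-k1) + 0.05 * k2      # toy stand-in for "NO density at a point"

r2 = {name: np.corrcoef(k, qoi)[0, 1] ** 2 for name, k in [("k1", k1), ("k2", k2)]}
```

Repeating this for every scalar QoI along the shock, then integrating the resulting sensitivity profiles with a weighting function, gives the per-reaction ranking described in the record.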

  14. Global sensitivity analysis of the XUV-ABLATOR code

    NASA Astrophysics Data System (ADS)

    Nevrlý, Václav; Janku, Jaroslav; Dlabka, Jakub; Vašinek, Michal; Juha, Libor; Vyšín, Luděk; Burian, Tomáš; Lančok, Ján; Skřínský, Jan; Zelinger, Zdeněk; Pira, Petr; Wild, Jan

    2013-05-01

    The availability of a numerical model providing reliable estimation of the parameters of ablation processes induced by extreme ultraviolet laser pulses on nanosecond and sub-picosecond timescales is highly desirable for recent experimental research as well as for practical purposes. The performance of the one-dimensional thermodynamic code (XUV-ABLATOR) in predicting the relationship between ablation rate and laser fluence is investigated for three reference materials: (i) silicon, (ii) fused silica and (iii) polymethyl methacrylate. The effect of pulse duration and different material properties on the model predictions is studied in this contribution for conditions typical of two compact laser systems operating at 46.9 nm. A software implementation of the XUV-ABLATOR code, including a graphical user interface and a set of tools for sensitivity analysis, was developed. Global sensitivity analysis using high-dimensional model representation in combination with quasi-random sampling was applied in order to identify the most critical input data as well as to explore the uncertainty range of the model results.

  15. Variability-based global sensitivity analysis of circuit response

    NASA Astrophysics Data System (ADS)

    Opalski, Leszek J.

    2014-11-01

    The research problem of interest to this paper is: how to determine efficiently and objectively the most and the least influential parameters of a multimodule electronic system, given the system model f and the module parameter variation ranges. The author investigates whether existing generic global sensitivity methods are applicable to electronic circuit design, even though they were developed (and successfully applied) in quite distant engineering areas. The response time of a photodiode detector analog front-end system is used to reveal the capability of the selected global sensitivity approaches under study.

  16. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on the decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We notice that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The Radiative Transfer Model (RTM) behaves more non-linearly in SMEX02 and linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects, as well as interactions between the parameters, change with water- and energy-rich environments.

  17. Global sensitivity analysis of the Indian monsoon during the Pleistocene

    NASA Astrophysics Data System (ADS)

    Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.

    2015-01-01

    The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature in phase with June to July insolation (high glaciation favouring a late-phase response), sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling and even has a positive effect on precipitation.
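The emulator of step (2) can be illustrated with a minimal Gaussian-process-style interpolator: fit a cheap surrogate to a small experiment plan, then query it densely. The 1-D toy function, kernel and lengthscale below are our own, far simpler than the HadCM3 emulator.

```python
# Sketch: RBF-kernel interpolation as a stand-in for a GP-mean emulator of
# an expensive simulator, trained on a small "experiment plan".
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def expensive_model(x):              # toy stand-in for a climate-model response
    return np.sin(4.0 * x) + 0.5 * x

xt = np.linspace(0.0, 1.0, 12)       # small design: 12 "model runs"
yt = expensive_model(xt)

K = rbf(xt, xt) + 1e-6 * np.eye(len(xt))   # jitter for numerical stability
alpha = np.linalg.solve(K, yt)

def emulator(x):                     # cheap predictor usable for dense sampling
    return rbf(x, xt) @ alpha

xs = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(emulator(xs) - expensive_model(xs)))
```

Once the emulator reproduces the simulator to acceptable accuracy, sensitivity diagnostics (step 3) can be computed from millions of emulator evaluations instead of scarce simulator runs.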

  18. Simulation of the global contrail radiative forcing: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.

    2012-12-01

    The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The most up-to-date Community Atmospheric Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.

  19. Global sensitivity analysis in control-augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1989-01-01

    In this paper, an integrated approach to structural/control design is proposed in which variables in both the passive (structural) and active (control) disciplines of an optimization process are changed simultaneously. The global sensitivity equation (GSE) method of Sobieszczanski-Sobieski (1988) is used to obtain the behavior sensitivity derivatives necessary for the linear approximations used in the parallel multidisciplinary synthesis problem. The GSE allows for the decoupling of large systems into smaller subsystems and thus makes it possible to determine the local sensitivities of each subsystem's outputs to its inputs and parameters. The advantages in using the GSE method are demonstrated using a finite-element representation of a truss structure equipped with active lateral displacement controllers, which is undergoing forced vibration.
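The GSE idea, assembling each subsystem's local partial sensitivities into a linear system whose solution gives the total derivatives of the coupled system, can be sketched on a two-discipline scalar toy; the coefficients are illustrative, not the truss/control example.

```python
# Sketch: global sensitivity equations for two coupled scalar subsystems
#   y1 = f1(x, y2) = 2*x + 0.3*y2
#   y2 = f2(x, y1) = x - 0.5*y1
# Total derivatives dy/dx solve  [I, -df1/dy2; -df2/dy1, I] dy/dx = [df1/dx; df2/dx].
import numpy as np

df1_dx, df1_dy2 = 2.0, 0.3     # local partials of subsystem 1
df2_dx, df2_dy1 = 1.0, -0.5    # local partials of subsystem 2

A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)  # total derivatives [dy1/dx, dy2/dx]
```

Each subsystem only needs its own local partials, which is what allows large coupled systems to be decoupled and analysed in parallel; here the closed-form solution of the coupled pair gives dy1/dx = 2 and dy2/dx = 0, matching the solve.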

  20. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different, and sometimes conflicting and/or counter-intuitive, assessments of sensitivity. Moreover, most global sensitivity techniques are too computationally demanding to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
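The variogram quantity that VARS builds on, gamma(h) = 0.5 * E[(f(x+h) - f(x))^2], can be sketched along one parameter axis of a toy response surface; this is a 1-D illustration of the concept, not the VARS algorithm itself.

```python
# Sketch: empirical directional variogram of a 1-D response surface at
# several lags, i.e. sensitivity information across a range of scales.
import numpy as np

def f(x):
    return np.sin(3.0 * x) + 0.5 * x      # toy response surface

x = np.linspace(0.0, 1.0, 201)
y = f(x)

def variogram(y, lag):
    d = y[lag:] - y[:-lag]
    return 0.5 * np.mean(d ** 2)

gammas = [variogram(y, lag) for lag in (1, 5, 20)]   # small to large scales
```

Small lags recover derivative-like (local) sensitivity while large lags approach variance-like (global) sensitivity, which is how VARS spans the two conventional philosophies.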

  1. A global sensitivity analysis for African sleeping sickness

    PubMed Central

    DAVIS, STEPHEN; AKSOY, SERAP; GALVANI, ALISON

    2012-01-01

    African sleeping sickness is a parasitic disease transmitted through the bites of tsetse flies of the genus Glossina. We constructed mechanistic models for the basic reproduction number, R0, of Trypanosoma brucei gambiense and Trypanosoma brucei rhodesiense, respectively the causative agents of West and East African human sleeping sickness. We present global sensitivity analyses of these models that rank the importance of the biological parameters that may explain variation in R0, using parameter ranges based on the literature, field data and expertise from Uganda. For West African sleeping sickness, our results indicate that the proportion of bloodmeals taken from humans by Glossina fuscipes fuscipes is the most important factor, suggesting that differences in the exposure of humans to tsetse are fundamental to the distribution of T. b. gambiense. The second-ranked parameter for T. b. gambiense, and the highest-ranked for T. b. rhodesiense, was the proportion of Glossina refractory to infection. This finding underlines the possible implications of recent work showing that nutritionally stressed tsetse are more susceptible to trypanosome infection, and provides broad support for control strategies in development that are aimed at increasing refractoriness in tsetse flies. We note, though, that for T. b. rhodesiense the population parameters for tsetse (species composition, survival and abundance) were ranked almost as highly as the proportion refractory, and that the model assumed regular treatment of livestock with trypanocides as an established practice in the areas of Uganda experiencing East African sleeping sickness. PMID:21078220

  2. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known. Decision-making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is applied to several structural configurations to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends are found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
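    The classic FAST recipe the abstract relies on can be sketched compactly: each input is assigned an incommensurate integer frequency, the model is evaluated along the resulting space-filling search curve, and first-order indices are read off the Fourier spectrum. The additive test model and frequency set below are illustrative assumptions, not the vibroacoustic TL model.

```python
import numpy as np

def fast_first_order(model, omegas, n_samples=1025, n_harmonics=4):
    """Classic FAST first-order indices (sketch): drive all inputs along a
    periodic search curve; each index is the spectral power at its frequency
    (and harmonics) divided by the total output variance."""
    s = np.pi * (2 * np.arange(1, n_samples + 1) - n_samples - 1) / n_samples
    # Triangular search curve mapping s to inputs uniform on [0, 1].
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi
    y = model(x)
    total_var = np.var(y)
    indices = []
    for w in omegas:
        d = 0.0
        for p in range(1, n_harmonics + 1):
            a = np.mean(y * np.cos(p * w * s))
            b = np.mean(y * np.sin(p * w * s))
            d += 2 * (a ** 2 + b ** 2)
        indices.append(d / total_var)
    return np.array(indices)

def model(x):  # illustrative additive model, not from the paper
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

# Frequencies chosen so that no harmonics up to order 4 coincide.
S = fast_first_order(model, omegas=np.array([11, 35, 73]))
```

    For this additive model the indices recover the known variance shares (about 0.19, 0.76, 0.05) up to the small fraction of variance carried by harmonics beyond the fourth.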

  3. A new variance-based global sensitivity analysis technique

    NASA Astrophysics Data System (ADS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2013-11-01

    A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices only reflect the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is only applicable for computing low-order indices. The model emulation is able to estimate all the W-indices at low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
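    For reference, the Sobol' main and total effect indices that the W-indices generalize can be estimated with a standard pick-freeze Monte Carlo scheme. This sketch uses the Ishigami test function mentioned in the abstract; the estimator choices (Saltelli 2010 for first-order, Jansen for total) are ours, not the paper's.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def sobol_indices(model, dim, n=100_000, seed=1):
    """Pick-freeze estimates of first-order (S) and total (ST) Sobol' indices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = [], []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace only column i ("pick-freeze")
        yABi = model(ABi)
        S.append(np.mean(yB * (yABi - yA)) / var)         # Saltelli (2010)
        ST.append(0.5 * np.mean((yA - yABi) ** 2) / var)  # Jansen estimator
    return np.array(S), np.array(ST)

S, ST = sobol_indices(ishigami, dim=3)
```

    With a = 7 and b = 0.1 the analytic values are S = (0.314, 0.442, 0) and ST3 = 0.244, so the gap between S3 and ST3 isolates the x1-x3 interaction.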

  4. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants. PMID:25459861

  5. Global sensitivity analysis of a 3D street canyon model—Part II: Application and physical insight using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Benson, James; Ziehn, Tilo; Dixon, Nick S.; Tomlin, Alison S.

    In this work global sensitivity studies using Monte Carlo sampling and high dimensional model representations (HDMR) have been carried out on the k-ɛ closure computational fluid dynamics (CFD) model MISKAM, allowing detailed representation of the effects of changing input parameters on the model outputs. The scenario studied is that of a complex street canyon in the city of York, UK. The sensitivity of the turbulence and mean flow fields to the input parameters is detailed both at specific measurement points and in the associated canyon cross-section to aid comparison with field data. This analysis gives insight into how model parameters can influence the predicted outputs. It also shows the relative strength of each parameter in its influence. Four main input parameters are addressed. Three parameters are surface roughness lengths, determining the flow over a surface, and the fourth is the background wind direction. In order to determine the relative importance of each parameter, sensitivity indices are calculated for the canyon cross-section. The sensitivity of the flow structures in and above the canyon to each parameter is found to be highly location dependent. In general, at a particular measurement point, it is the closest wall surface that is most influential on the model output. However, due to the complexity of the flow at different wind angles this is not always the case, for example when a re-circulating canyon flow pattern is present. The background wind direction is shown to be an important parameter as it determines the surface features encountered by the flow. The accuracy with which this is specified when modelling a full-scale situation is therefore an important consideration when assessing model uncertainty. Overall, the uncertainty due to roughness lengths is small in comparison to the mean outputs, indicating that the model is well defined even with large ranges of input parameter uncertainty.

  6. Sensitivity analysis of a global aerosol model to understand how parametric uncertainties affect model predictions

    NASA Astrophysics Data System (ADS)

    Lee, L. A.; Carslaw, K. S.; Pringle, K. J.

    2012-04-01

    Global aerosol contributions to radiative forcing (and hence climate change) are persistently subject to large uncertainty in successive Intergovernmental Panel on Climate Change (IPCC) reports (Schimel et al., 1996; Penner et al., 2001; Forster et al., 2007). As such, more complex global aerosol models are being developed to simulate aerosol microphysics in the atmosphere. The uncertainty in global aerosol model estimates is currently estimated by measuring the diversity amongst different models (Textor et al., 2006, 2007; Meehl et al., 2007). The uncertainty at the process level due to the need to parameterise in such models is not yet understood, and it is difficult to know whether the added model complexity comes at a cost of high model uncertainty. In this work the model uncertainty and its sources due to the uncertain parameters are quantified using variance-based sensitivity analysis. Due to the complexity of a global aerosol model, we use Gaussian process emulation with a sufficient experimental design to make such a sensitivity analysis possible. The global aerosol model used here is GLOMAP (Mann et al., 2010) and we quantify the sensitivity of numerous model outputs to 27 expertly elicited uncertain model parameters describing emissions and processes such as growth and removal of aerosol. Using the R package DiceKriging (Roustant et al., 2010) along with the package sensitivity (Pujol, 2008), it has been possible to produce monthly global maps of model sensitivity to the uncertain parameters over the year 2008. Global model outputs estimated by the emulator are shown to be consistent with previously published estimates (Spracklen et al. 2010, Mann et al. 2010), but now we have an associated measure of parameter uncertainty and its sources. It can be seen that globally some parameters have no effect on the model predictions and any further effort in their development may be unnecessary, although a structural error in the model might also be identified.

  7. Global sensitivity analysis in wastewater treatment plant model applications: prioritizing sources of uncertainty.

    PubMed

    Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R² > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD, among others, which are largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters; e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application should be highlighted. Overall, the global sensitivity analysis proved a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways for reducing uncertainties in plant performance. This information can help engineers design robust WWTPs. PMID:20828785
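    Standardized regression coefficients (SRCs) of the kind used in the study can be computed directly from a Monte Carlo sample: regress the output on the inputs and rescale each slope by the ratio of input to output standard deviations. The input names and the linear "sludge production" model below are illustrative assumptions, not the BSM1 model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
# Hypothetical Monte Carlo sample of two uncertain inputs (illustrative).
ash = rng.normal(0.20, 0.03, n)       # ash content [-]
cod = rng.normal(100.0, 5.0, n)       # influent inert particulate COD [g/m3]
y = 3000.0 * ash + 1.5 * cod + rng.normal(0, 10.0, n)  # toy "sludge production"

# Ordinary least squares, then standardize the slopes.
X = np.column_stack([np.ones(n), ash, cod])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
src = beta[1:] * X[:, 1:].std(axis=0) / y.std()   # standardized regression coeffs
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```

    When R² is high, as the abstract reports, each squared SRC approximates the fraction of output variance explained by that input, which is exactly the kind of quantitative attribution the study highlights.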

  8. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be chosen carefully. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN) and a variance-based method (Sobol'). The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
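    The bootstrapping step the abstract describes requires no additional model runs: the available input-output sample is resampled with replacement and the sensitivity index is recomputed on each resample, giving confidence intervals that reveal whether the sample size is adequate. A sketch using standardized regression coefficients as a stand-in index on a toy model:

```python
import numpy as np

def src_index(X, y):
    """Standardized regression coefficients as a simple sensitivity index."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta[1:] * X.std(axis=0) / y.std()

rng = np.random.default_rng(0)
n = 2000
X = rng.random((n, 2))
y = 5.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, n)  # toy model (illustrative)

# Bootstrap: resample the SAME model runs; no new model evaluations needed.
boot = np.array([
    src_index(X[idx], y[idx])
    for idx in rng.integers(0, n, size=(500, n))
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% confidence intervals
```

    Wide or overlapping intervals signal that more model runs are needed before the index ranking can be trusted; here the two intervals are clearly separated.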

  9. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved in the analysis. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios. This is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.

  10. Comparison of three different methods for global sensitivity analysis - application to a complex environmental model

    NASA Astrophysics Data System (ADS)

    Werisch, Stefan; Krause, Julia

    2014-05-01

    Complex environmental models which are able to consider the dynamic interactions between plants, soils and the environment are suitable tools to predict the impact of climate variability and climate change on the water budget of small catchments. Unfortunately, the number of potential calibration parameters increases with the complexity of these models. Methods of global sensitivity analysis (GSA) are considered helpful tools to identify the sensitive, and therefore relevant, model parameters which need to be considered in the optimization process. To assess the efficiency of these approaches, three different methods for GSA of model parameters, namely (1) Mutual Entropy (ME), (2) Regional Sensitivity Analysis (RSA) and (3) the enhanced Fourier Amplitude Sensitivity Test (eFAST), have been tested and compared using the complex environmental model SWAP. The model was set up to simulate the water budget and soil water dynamics of a small experimental catchment in the Ore Mountains, Germany. Discharge and soil water content time series established the data basis for the sensitivity analysis. All three methods have been applied to investigate the sensitivity of the model parameters with regard to the different data types, different model efficiency measures and different time resolutions for the calculation of the efficiency measures. The results indicate that GSA methods from which only first-order sensitivities, i.e. the sole influence of a specific parameter on the model output, can be obtained (ME and RSA) are unsuitable for complex environmental models. They identified less than 20% of the model parameters as sensitive, while almost 80% of the model parameters were identified as sensitive on the basis of the total sensitivity index calculated by the eFAST method. Possible reasons for the failure of the first-order methods are the strong interactions of the parameters and the non-linear behavior of the model. A second important result of this study is that

  11. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
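    Sampling a high-dimensional parameter space with Sobol' sequences, as done for the 39 CICE parameters, can be reproduced with scipy's quasi-Monte Carlo module; the dimensionality and parameter bounds below are illustrative placeholders, not the CICE parameter ranges.

```python
import numpy as np
from scipy.stats import qmc

# Sobol' low-discrepancy sampling of a parameter space (4-D here for brevity).
sampler = qmc.Sobol(d=4, scramble=True, seed=7)
unit = sampler.random_base2(m=8)          # 2**8 = 256 points in [0, 1)^4
lower = [0.1, 0.03, 1.0e-4, 0.2]          # hypothetical parameter bounds
upper = [0.5, 0.25, 5.0e-4, 0.8]
sample = qmc.scale(unit, lower, upper)    # rescale to the physical ranges
```

    Powers of two (via random_base2) preserve the balance properties of the Sobol' sequence, so the full parameter space is covered far more evenly than with pseudo-random draws of the same size.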

  12. A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology

    PubMed Central

    Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.

    2008-01-01

    Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system as, by default, they hold all other parameters fixed at baseline values. Using the techniques described herein, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, both in deterministic and stochastic settings, and propose novel techniques to handle problems encountered during these analyses. PMID:18572196
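    One workhorse combination popularized by this methodology is Latin hypercube sampling (LHS) paired with partial rank correlation coefficients (PRCC): rank-transform inputs and output, regress out the other inputs, and correlate the residuals. The three-parameter monotone toy model below is an assumption for illustration.

```python
import numpy as np
from scipy.stats import rankdata, qmc

def prcc(X, y):
    """Partial rank correlation of each input with the output: rank-transform,
    then correlate residuals after regressing out the remaining inputs."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for i in range(X.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(3)
n = 1000
X = qmc.LatinHypercube(d=3, seed=3).random(n)   # LHS over three toy parameters
y = np.exp(2 * X[:, 0]) - X[:, 1] + rng.normal(0, 0.05, n)  # monotone toy model

coeffs = prcc(X, y)
```

    The rank transform makes PRCC robust to non-linear but monotone relationships, which is why it is a common default for ODE models in systems biology.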

  13. Global Sensitivity Analysis of Past Terrestrial Climates to astronomical forcing, CO2 and glaciation level

    NASA Astrophysics Data System (ADS)

    Crucifix, M.; Araya-Melo, P. A.

    2013-12-01

    The sensitivity of terrestrial climates of the southern hemisphere to astronomical forcing, CO2 and glaciation level is systematically investigated by means of a global sensitivity analysis. The approach is founded on the analysis of about 100 experiments performed with the GCM HadCM3, statistically analysed using a Gaussian process emulator. The presentation emphasises the importance of the selection of experiments (experiment design) and the validation of the statistical model. At the time of writing only preliminary results have been obtained, but following the approach our group applied to the Indian monsoon and vegetation feedback analysis, we expect to be able to show and discuss amplitude and phase relationships between changes in terrestrial environments of the southern hemisphere and the driving factors.

  14. Quantitative global sensitivity analysis of the RZWQM to warrant a robust and effective calibration

    NASA Astrophysics Data System (ADS)

    Esmaeili, Sara; Thomson, Neil R.; Tolson, Bryan A.; Zebarth, Bernie J.; Kuchta, Shawn H.; Neilsen, Denise

    2014-04-01

    Sensitivity analysis is a useful tool to identify key model parameters as well as to quantify simulation errors resulting from parameter uncertainty. The Root Zone Water Quality Model (RZWQM) has been subjected to various sensitivity analyses; however, in most of these efforts a local sensitivity analysis method was implemented, the nonlinear response was neglected, and the dependency among parameters was not examined. In this study we employed a comprehensive global sensitivity analysis to quantify the contribution of 70 model input parameters (35 hydrological parameters and 35 nitrogen cycle parameters) to the uncertainty of key RZWQM outputs relevant to raspberry row crops in Abbotsford, BC, Canada. Specifically, 9 model outputs that capture various vertical-spatial and temporal domains were investigated. A rank transformation method was used to account for the nonlinear behavior of the model. The variance of the model outputs was decomposed into correlated and uncorrelated partial variances to provide insight into parameter dependency and interaction. The results showed that, in general, the field capacity (soil water content at -33 kPa) in the upper 30 cm of the soil horizon had the greatest contribution (>30%) to the uncertainty in the estimated water flux and evapotranspiration. The most influential parameters affecting the simulation of soil nitrate content, mineralization, denitrification, nitrate leaching and plant nitrogen uptake were the transient coefficient of the fast to intermediate humus pool, the carbon to nitrogen ratio of the fast humus pool, the organic matter decay rate in the fast humus pool, and the field capacity. The correlated contribution to the model output uncertainty was <10% for the set of parameters investigated. The findings from this effort were utilized in two calibration case studies to demonstrate the utility of this global sensitivity analysis to reduce the risk of over-parameterization, and to identify the vertical location of

  15. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas arise partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their linear or nonlinear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among these parameters, by means of the extended FAST method. The results show that the maximum tsunami wave heights are most sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are pronounced at specific sites in the offshore area, and there
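    The Morris elementary-effects screening used for the qualitative ranking can be sketched as follows. The toy model is an illustrative stand-in for the COMCOT tsunami simulations, and mu* (the mean absolute elementary effect) is the importance measure.

```python
import numpy as np

def morris_mu_star(model, dim, n_traj=50, levels=8, seed=5):
    """Morris screening: mean absolute elementary effect (mu*) per input."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = [[] for _ in range(dim)]
    for _ in range(n_traj):
        # Random base point on the lower half of the grid, so x + delta <= 1.
        x = rng.integers(0, levels // 2, dim) / (levels - 1)
        y = model(x)
        for i in rng.permutation(dim):        # one-at-a-time perturbations
            x2 = x.copy()
            x2[i] += delta
            y2 = model(x2)
            ee[i].append(abs(y2 - y) / delta)
            x, y = x2, y2
    return np.array([np.mean(e) for e in ee])

def toy_model(x):  # illustrative stand-in for a tsunami simulation
    return 4.0 * x[0] + x[1] ** 2 + 0.1 * np.sin(x[2])

mu_star = morris_mu_star(toy_model, dim=3)
```

    Comparing mu* with the spread of the elementary effects is what yields the paper's qualitative linear-versus-nonlinear descriptions: a linear input gives identical effects everywhere, while a nonlinear or interacting input gives a wide spread.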

  16. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. PMID:24650614

  17. Global sensitivity analysis of ozone, HO2, and OH during ARCTAS campaign

    NASA Astrophysics Data System (ADS)

    Christian, K. E.; Mao, J.; Brune, W. H.

    2015-12-01

    Modeling the chemical state of the atmosphere is a complicated endeavor due to the complex, non-linear interactions between meteorology, emissions, and kinetics that govern trace gas concentrations. Given the rapid environmental changes taking place, the Arctic is one area of particular interest with regard to climate and atmospheric composition. To observe these changes to the Arctic atmosphere, NASA funded the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign (2008). As part of the mission, measurements of oxidative factors (hydroxyl (OH) and hydroperoxyl (HO2) abundances) were taken using the Airborne Tropospheric Hydrogen Oxides Sensor (ATHOS) aboard the NASA DC-8. Using GEOS-Chem, a popular global chemical transport model, we perform a global sensitivity analysis for the period of the ARCTAS campaign, allowing non-linear interactions between input factors to be accounted for and quantified in the analysis. Sensitivities are determined for around 50 model input factors and for combinations of pairs of input factors using the Random Sampling - High Dimensional Model Representation (RS-HDMR) method. We calculate the uncertainty in these oxidative factors, and in ozone, the ozone production rate, and the hydroxyl production rate, and find the sensitivity of these oxidative factors, and of the differences between the measured and modeled oxidative factors, to model inputs in meteorology, emissions, and chemistry. This presentation will include a solid estimate of GEOS-Chem model uncertainty for the period of the ARCTAS campaign, the emissions, meteorology, or chemistry to which oxidative properties are most sensitive for these periods, and the factors to which the differences between the modeled and measured oxidative factors are most sensitive.

  18. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/ Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.

  19. Toward a more robust variance-based global sensitivity analysis of model outputs

    SciTech Connect

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
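    McKay's 'correlation ratio' (main effect) can be approximated from a plain random sample by binning each input and comparing the variance of the per-bin conditional means with the total output variance. This binned estimator is a simplified stand-in for the replicated Latin hypercube scheme discussed in the paper; the additive toy model is an illustrative assumption.

```python
import numpy as np

def main_effect_binned(xi, y, n_bins=20):
    """Correlation-ratio estimate of a first-order index:
    S_i ~= Var(E[Y | X_i]) / Var(Y), with the conditional mean
    approximated by averaging Y within quantile bins of X_i."""
    bins = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(xi, bins[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return np.var(cond_means) / np.var(y)

rng = np.random.default_rng(11)
n = 20000
x = rng.random((n, 3))
y = np.sin(np.pi * x[:, 0]) + 0.3 * x[:, 1]   # toy additive model; x3 inert

s = [main_effect_binned(x[:, i], y) for i in range(3)]
```

    Replicated Latin hypercube sampling improves on this by reusing the same stratified input levels across replicates, which is what makes the adaptive accuracy assessment in the paper affordable.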

  20. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex, with ever more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based on different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol

  1. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceeded to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians, with the daily variance prescribed according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of four other parameters, nitrogen use efficiency (NUE), the base rate for maintenance respiration (BR_MR), the growth respiration fraction (RG_FRAC), and the allocation to the plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the remaining parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global
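    The Bayesian calibration step described above (iid Gaussian discrepancies, flat bounded priors, MCMC sampling of the posterior) can be sketched with a minimal random-walk Metropolis sampler. The one-parameter toy problem below is purely illustrative and is not the 18-parameter carbon model:

```python
import numpy as np

def metropolis(logpost, x0, n_steps=20_000, step=0.1, seed=0):
    """Random-walk Metropolis sampler: a minimal stand-in for the MCMC
    calibration described in the abstract above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)  # symmetric proposal
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy calibration: infer a single mean parameter from noisy observations,
# with iid Gaussian discrepancies (sigma = 0.5) and a flat prior.
rng = np.random.default_rng(3)
y_obs = 2.0 + 0.5 * rng.standard_normal(200)
logpost = lambda theta: -0.5 * np.sum((y_obs - theta[0]) ** 2) / 0.25
chain = metropolis(logpost, [0.0])
```

    With a flat prior and Gaussian likelihood, the posterior mean of the chain (after burn-in) should match the sample mean of the observations, which gives a quick sanity check on the sampler.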

  2. Comparison of two parameterizations of a turbulence-induced flocculation model through global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cottereau, R.; Rochinha, F. A.; Coutinho, A. L. G. A.

    2014-08-01

    This paper describes a global sensitivity analysis of a fractal-based turbulence-induced flocculation model. The quantities of interest in this analysis are related to the floc diameters in two different configurations. The input parameters with which the sensitivity analyses are performed are the floc aggregation and breakup parameters, the fractal dimension, and the diameter of the primary particles. Two related versions of the flocculation model, both commonly encountered in the literature, are considered: (i) using a dimensional floc breakup parameter, and (ii) using a non-dimensional floc breakup parameter. The main results of the sensitivity analyses are that only two parameters of model (ii) are significant (the aggregation and breakup parameters) and that the relationships between parameter and quantity of interest remain simple. In contrast, with model (i), all parameters have to be considered. When identifying model parameters based on measurements of floc diameters, this analysis hence suggests the use of model (ii) rather than (i). Further, improved models of the fractal dimension do not seem to be required when using the non-dimensional model (ii).

  3. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion of the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from those of the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be
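    The Morris screening method compared above can be sketched in a few lines: each factor's importance is scored by the mean absolute "elementary effect" over a set of randomized one-at-a-time trajectories. This is a generic illustration with our own toy linear model, not the paper's stormwater model:

```python
import numpy as np

def morris_mu_star(f, d, r=50, levels=4, seed=0):
    """Morris screening: mu* is the mean absolute elementary effect of
    each factor over r randomized one-at-a-time trajectories on a grid."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))      # standard step, e.g. 2/3 for 4 levels
    ee = np.zeros((r, d))
    for k in range(r):
        x = rng.integers(0, levels - 1, size=d) / (levels - 1)  # grid base point
        y0 = f(x)
        for i in rng.permutation(d):           # change one factor per step
            x2 = x.copy()
            step = delta if x[i] + delta <= 1.0 else -delta
            x2[i] = x[i] + step
            y1 = f(x2)
            ee[k, i] = (y1 - y0) / step
            x, y0 = x2, y1
    return np.abs(ee).mean(axis=0)

# Linear toy model: elementary effects equal the coefficients exactly
mu_star = morris_mu_star(lambda x: 3.0 * x[0] + x[1], d=2, r=20)
```

    The number of trajectories r is exactly the quantity whose convergence the abstract examines: too few trajectories and mu* rankings become unstable, so "screened-out" factors may in fact be important.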

  4. A Protocol for the Global Sensitivity Analysis of Impact Assessment Models in Life Cycle Assessment.

    PubMed

    Cucurachi, S; Borgonovo, E; Heijungs, R

    2016-02-01

    The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need for integrating the LCA and risk analysis methodologies, given their many common aspects. One way to achieve such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention from practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis in LCA. The article focuses on life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for the quantification of the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of: (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them. PMID:26595377

  5. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant levels of uncertainty, due to the limited experimental data available and to poor understanding of the interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature, and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which are of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted to within 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and

  6. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
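    The bootstrap confidence estimates mentioned above follow a generic pattern: resample the available model runs with replacement, recompute the sensitivity metric on each resample, and read off percentile bounds. The sketch below is our own generic illustration (the metric and data are stand-ins, not the VARS metrics):

```python
import numpy as np

def bootstrap_ci(metric, runs, n_boot=500, alpha=0.05, seed=0):
    """Nonparametric bootstrap of a sensitivity metric: resample the model
    runs with replacement and report percentile confidence bounds."""
    rng = np.random.default_rng(seed)
    n = len(runs)
    stats = np.array([metric(runs[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])

rng = np.random.default_rng(0)
runs = rng.normal(size=2000)   # stand-in: one metric contribution per model run
lo, hi = bootstrap_ci(np.mean, runs)
```

    Wide intervals, or intervals that overlap across factors, signal that a factor ranking is not yet reliable at the given sample size.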

  7. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering, a global sensitivity analysis technique that seeks the inputs and input ranges most likely to lead to a given subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and by the assumption that all of the dimensions are independent. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
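    Monte Carlo Filtering (also known as Regional Sensitivity Analysis) can be sketched generically: split the input sample by whether each run's output is "behavioural", then score each input by how different its two conditional distributions are. This is our own minimal illustration, not the NASA tool; the toy model and threshold are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def monte_carlo_filtering(X, y, is_behavioural):
    """Monte Carlo Filtering / Regional Sensitivity Analysis: split the
    input sample by whether the output is 'behavioural', then score each
    input by the Kolmogorov-Smirnov distance between the two conditional
    input distributions (large distance = influential input)."""
    mask = is_behavioural(y)
    return np.array([ks_2samp(X[mask, i], X[~mask, i]).statistic
                     for i in range(X.shape[1])])

rng = np.random.default_rng(1)
X = rng.uniform(size=(5000, 3))
y = X[:, 0] ** 2 + 0.01 * X[:, 2]      # output driven almost entirely by x1
d_ks = monte_carlo_filtering(X, y, lambda out: out < np.median(out))
```

    An influential input (here x1) produces a large KS distance, while an input the model never uses (here x2) scores near zero.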

  8. Global sensitivity analysis of a dynamic model for gene expression in Drosophila embryos

    PubMed Central

    McCarthy, Gregory D.; Drewell, Robert A.

    2015-01-01

    It is well known that gene regulation is a tightly controlled process in early organismal development. However, the roles of key processes involved in this regulation, such as transcription and translation, are less well understood, and mathematical modeling approaches in this field are still in their infancy. In recent studies, biologists have taken precise measurements of protein and mRNA abundance to determine the relative contributions of key factors involved in regulating protein levels in mammalian cells. We now approach this question from a mathematical modeling perspective. In this study, we use a simple dynamic mathematical model that incorporates terms representing transcription, translation, mRNA and protein decay, and diffusion in an early Drosophila embryo. We perform global sensitivity analyses on this model using various different initial conditions and spatial and temporal outputs. Our results indicate that transcription and translation are often the key parameters to determine protein abundance. This observation is in close agreement with the experimental results from mammalian cells for various initial conditions at particular time points, suggesting that a simple dynamic model can capture the qualitative behavior of a gene. Additionally, we find that parameter sensitivities are temporally dynamic, illustrating the importance of conducting a thorough global sensitivity analysis across multiple time points when analyzing mathematical models of gene regulation. PMID:26157608

  9. What do we mean by sensitivity analysis? The need for comprehensive characterization of "global" sensitivity in Earth and Environmental systems models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2015-05-01

    Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.

  10. A Global Analysis of CYP51 Diversity and Azole Sensitivity in Rhynchosporium commune.

    PubMed

    Brunner, Patrick C; Stefansson, Tryggvi S; Fountaine, James; Richina, Veronica; McDonald, Bruce A

    2016-04-01

    CYP51 encodes the target site of the azole class of fungicides widely used in plant protection. Some ascomycete pathogens carry two CYP51 paralogs called CYP51A and CYP51B. A recent analysis of CYP51 sequences in 14 European isolates of the barley scald pathogen Rhynchosporium commune revealed three CYP51 paralogs, CYP51A, CYP51B, and a pseudogene called CYP51A-p. The same analysis showed that CYP51A exhibits a presence/absence polymorphism, with lower sensitivity to azole fungicides associated with the presence of a functional CYP51A. We analyzed a global collection of nearly 400 R. commune isolates to determine if these findings could be extended beyond Europe. Our results strongly support the hypothesis that CYP51A played a key role in the emergence of azole resistance globally and provide new evidence that the CYP51A gene in R. commune has further evolved, presumably in response to azole exposure. We also present evidence for recent long-distance movement of evolved CYP51A alleles, highlighting the risk associated with movement of fungicide resistance alleles among international trading partners. PMID:26623995

  11. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Debusschere, B.; Najm, H. N.; Williams, M.; Thornton, P. E.

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  12. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  13. Global sensitivity analysis of complex numerical landslide models based on Gaussian-Process meta-modelling

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Foerster, E.

    2012-04-01

    Large-scale landslide prediction is typically based on numerical modeling, with computer codes generally involving a large number of input parameters. Addressing the influence of each of them on the final result and providing a ranking procedure may be useful for risk management purposes, especially to guide future laboratory or in situ characterizations and studies, but also to simplify the model by fixing the input parameters that have negligible influence. Variance-based global sensitivity analysis relying on the Sobol' indices can provide such valuable information and presents the advantages of exploring the sensitivity to input parameters over their whole range of variation (i.e. in a global manner), of fully accounting for possible interactions between them, and of being applicable without introducing a priori assumptions on the mathematical formulation of the landslide model. Nevertheless, such analyses require a large number of computer code simulations (typically a thousand), which is impracticable for computationally demanding simulations with computation times ranging from several hours to several days. To overcome this difficulty, we propose a "meta-model"-based strategy consisting of replacing the complex simulator with a "costless-to-evaluate" statistical approximation (i.e. emulator) provided by a Gaussian-Process (GP) model. This allows computation of sensitivity measures from a limited number of simulations. This meta-modelling strategy is demonstrated on two cases. The first application is a simple analytical model based on the infinite slope analysis, which allows comparison of the sensitivity measures computed using the "true" model with those computed using the GP meta-model. The second application aims at ranking in terms of importance the properties of the elasto-plastic model describing the complex behaviour of the slip surface in the "La Frasse" landslide (Switzerland). This case is more challenging as a single simulation requires at least 4
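    The meta-modelling strategy described above (fit a Gaussian-Process emulator to a small design of expensive runs, then evaluate the cheap emulator many times in place of the simulator) can be sketched with scikit-learn. The toy simulator, kernel choice, and design sizes below are our own assumptions, not the landslide code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(X):
    # Stand-in for a costly simulator (hours per run in practice)
    return np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(0)
X_design = rng.uniform(size=(60, 2))        # 60 affordable "expensive" runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(X_design,
                                                    simulator(X_design))

# The cheap emulator now replaces the simulator in the Sobol' computation:
X_mc = rng.uniform(size=(20_000, 2))
y_hat = gp.predict(X_mc)
rmse = np.sqrt(np.mean((y_hat - simulator(X_mc)) ** 2))
```

    Checking the emulator's prediction error (here the RMSE against held-out simulator runs) is essential before trusting any sensitivity indices computed from it.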

  14. Spatial heterogeneity and sensitivity analysis of crop virtual water content at a global scale

    NASA Astrophysics Data System (ADS)

    Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2015-04-01

    In this study, the green and blue virtual water content (VWC) of four staple crops (i.e., wheat, rice, maize, and soybean) is quantified at a high spatial resolution for the period 1996-2005, and a sensitivity analysis is performed for the model parameters. In each grid cell, the crop VWC is obtained as the ratio between the total crop evapotranspiration over the growing season and the actual crop yield. The evapotranspiration is determined with a daily soil water balance that takes into account crop and soil properties, production conditions, and climate. The actual yield is estimated using country-based values provided by the FAOSTAT database, multiplied by a coefficient adjusting for the spatial variability within countries. The model improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The overall water use (blue+green) for the global production of the four grains investigated is 2673 km3/yr. Food production almost entirely depends on green water (>90%), but, where applied, irrigation makes production more water efficient, thus requiring lower VWC. The spatial variability of the virtual water content is partly driven by the yield pattern, with an average correlation coefficient of 0.83, and partly by reference evapotranspiration, with a correlation coefficient of 0.27. Wheat shows the highest spatial variability since it is grown under a wide range of climatic conditions, soil properties, and agricultural practices. The sensitivity analysis is performed to understand how uncertainties in input data propagate and impact the virtual water content accounting. In each cell, fixed changes are introduced to one input parameter at a time, and a sensitivity index, SI, is determined as the ratio between the relative variation of VWC (with respect to its baseline value) and the relative variation of the input parameter (with respect to its reference value). VWC is found to be most sensitive to planting date (PD), followed by the length of
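    The one-at-a-time sensitivity index defined above (relative change in output divided by relative change in input) can be written generically. The toy ET-over-yield model below is a deliberately simplified stand-in, not the paper's soil-water-balance model, and all names are ours:

```python
def sensitivity_index(model, params, name, rel_change=0.1):
    """One-at-a-time sensitivity index as defined in the abstract above:
    SI = (relative change in output) / (relative change in input)."""
    base = model(params)
    perturbed = {**params, name: params[name] * (1.0 + rel_change)}
    return ((model(perturbed) - base) / base) / rel_change

# Hypothetical toy VWC model: total evapotranspiration divided by yield
vwc = lambda p: p["et"] / p["yield_"]
p0 = {"et": 600.0, "yield_": 3.0}
si_et = sensitivity_index(vwc, p0, "et")      # +1: VWC scales linearly with ET
si_y = sensitivity_index(vwc, p0, "yield_")   # about -0.91 for a +10% yield step
```

    Note that SI = 1 for an input the output is directly proportional to, while inputs in the denominator give SI slightly below -1 in magnitude for finite perturbations.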

  15. Maximising the value of computer experiments using multi-method global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, F.; Iwema, J.; Rosolem, R.; Wagener, T.

    2015-12-01

    Global Sensitivity Analysis (GSA) is increasingly recognised as an essential technique for a structured and quantitative approach to the calibration and diagnostic evaluation of environmental models. However, the implementation and interpretation of GSA is complicated by a number of choices that users need to make and for which multiple, equally sensible, options are often available. These choices include, first of all, the choice of the GSA method, as well as many implementation details like the definition of the sampling space and strategy. The issue is exacerbated by computational complexity, in terms of both computing time and storage space needed to run the model, which might strongly constrain the number of experiments that can be afforded. While several algorithmic improvements can be adopted to reduce the computing burden of specific GSA methods, in this talk we discuss how a multi-method approach can be established to maximise the information gathered from an individual sample of model evaluations. Using as an example the GSA of a land surface model, we show how different analytical and approximation techniques can be applied sequentially to the same sample of model inputs and outputs, providing complementary information about the model behaviour from different angles, and allowing us to test the impact of the choices made to generate the sample. We further expand our analysis to show how GSA is interconnected with model calibration and uncertainty analysis, so that a careful design of the simulation experiment can be used to address different questions simultaneously.

  16. Global sensitivity analysis and Bayesian parameter inference for solute transport in porous media colonized by biofilms.

    PubMed

    Younes, A; Delay, F; Fajraoui, N; Fahs, M; Mara, T A

    2016-08-01

    The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters. PMID:27182791

  17. Global sensitivity analysis and Bayesian parameter inference for solute transport in porous media colonized by biofilms

    NASA Astrophysics Data System (ADS)

    Younes, A.; Delay, F.; Fajraoui, N.; Fahs, M.; Mara, T. A.

    2016-08-01

    The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters.

  18. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    PubMed Central

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W.; Loizou, George D.

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to quantitatively evaluate thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. Workflows and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian process emulation, were used to implement the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the model input parameters describing the total thyroidal system. Furthermore, a narrower subset of the most influential parameters affecting the model output of maternal thyroid hormone levels was identified, in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian process emulation. Through global sensitivity analysis we have gained a better understanding of model behavior and performance beyond the domains of observation by simultaneously varying the model inputs over their ranges of plausible uncertainty. The sensitivity analysis helped identify parameters that determine the driving mechanisms of maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life stages such as pregnancy, which provides richer information on the model and the thyroidal system than local sensitivity analysis.
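The Morris screening step used in records like this one ranks factors by the mean absolute elementary effect, usually denoted μ*. A minimal sketch under simplifying assumptions (radial one-at-a-time perturbations on the unit hypercube, and a hypothetical toy model standing in for the thyroid model):

```python
import numpy as np

def morris_mu_star(f, k, r=20, delta=0.25, seed=0):
    """Estimate mu* (mean absolute elementary effect) for each of k
    factors on [0, 1]^k, from r randomly placed one-at-a-time steps."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # leave room for +delta
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta  # perturb one factor at a time
            ee[j, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# Toy model: factor 0 dominates, factor 2 is inert
mu = morris_mu_star(lambda x: 10.0 * x[0] + x[1] ** 2, k=3)
```

On this additive toy model the ranking mu[0] > mu[1] > mu[2] falls out immediately, which is exactly the screening information used to shortlist factors for more expensive variance-based analysis or emulation.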

  19. A comparison of five forest interception models using global sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Linhoss, Anna C.; Siegert, Courtney M.

    2016-07-01

    Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models: the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that, on average, storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose the parameters that are most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
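Comparisons like this one draw parameter samples from assumed probability distributions; Latin hypercube sampling is one common stratified design for doing so (a generic sketch, not necessarily the sampler used in this study):

```python
import numpy as np

def latin_hypercube(n, k, seed=0):
    """n samples in [0, 1]^k: each factor's range is split into n
    equal-probability bins, one sample per bin, independently
    permuted per factor so the factors are not correlated."""
    rng = np.random.default_rng(seed)
    # One point per stratum: (bin index + jitter) / n
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, k))) / n
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])  # decorrelate the columns
    return u

pts = latin_hypercube(100, 4)  # 100 samples of 4 parameters
```

Samples in [0, 1] are then mapped through each parameter's inverse CDF to impose the literature-derived distributions.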

  20. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  1. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    PubMed

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

    As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models, which estimate nitrogen dynamics under irrigation, are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, when modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters, and which of their interactions, contribute most to the variance of the model output. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics is reported. The objective was to compare different global sensitivity analysis (GSA) methods for identifying the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, a soil physical parameter, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verify that more effort should be focused on quantifying soil parameters to obtain more accurate nitrogen- and crop-related predictions, and stress the need to better calibrate the model in a global sense.
This study demonstrates the advantages of the GSA on a
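Variance-based methods such as EFAST estimate the same first-order indices S_i = Var(E[Y|X_i])/Var(Y) that Monte Carlo "pick-freeze" estimators target. A minimal pick-freeze sketch of what S_i measures (toy model, not the nitrogen model above):

```python
import numpy as np

def sobol_first_order(f, k, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices:
    draw two independent sample matrices A and B; for factor i,
    replace column i of B with that of A and correlate the outputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    yA = f(A)
    var = yA.var()
    s = np.zeros(k)
    for i in range(k):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # "freeze" factor i, resample the rest
        s[i] = (np.mean(yA * f(ABi)) - yA.mean() ** 2) / var
    return s

# Additive toy model: factor 0 carries ~99% of the output variance
s = sobol_first_order(lambda X: X[:, 0] + 0.1 * X[:, 1], k=2)
```

For this toy model the analytic values are S1 = 1/1.01 ≈ 0.99 and S2 ≈ 0.01, so the estimator should rank factor 0 far above factor 1.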

  2. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and often not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required if those scenarios were run sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
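PCE-based variance decomposition works because, in an orthonormal polynomial basis, the output variance is the sum of the squared expansion coefficients, and each Sobol index is the share contributed by the basis terms involving only that input. A two-input sketch with Legendre polynomials (toy model, not the RRCA coupled model):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def pce_first_order(f, n=2000, deg=2, seed=0):
    """Fit a tensor-product Legendre polynomial chaos expansion (PCE)
    to f(x1, x2), x ~ U(-1, 1)^2, by least squares, then read the
    first-order Sobol indices off the squared orthonormal coefficients."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = f(x[:, 0], x[:, 1])

    def phi(v, d):
        # Orthonormal Legendre polynomial: E[phi_d(V)^2] = 1, V ~ U(-1, 1)
        c = np.zeros(d + 1)
        c[d] = 1.0
        return np.sqrt(2.0 * d + 1.0) * leg.legval(v, c)

    degs = [(a, b) for a in range(deg + 1) for b in range(deg + 1)]
    A = np.column_stack([phi(x[:, 0], a) * phi(x[:, 1], b) for a, b in degs])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    var = sum(c * c for (a, b), c in zip(degs, coef) if (a, b) != (0, 0))
    s1 = sum(c * c for (a, b), c in zip(degs, coef) if a > 0 and b == 0) / var
    s2 = sum(c * c for (a, b), c in zip(degs, coef) if b > 0 and a == 0) / var
    return s1, s2

# Analytic check: f = x1 + 0.5*x2^2 has S1 = 15/16, S2 = 1/16
s1, s2 = pce_first_order(lambda x1, x2: x1 + 0.5 * x2 ** 2)
```

Because the indices come from the fitted coefficients rather than extra model runs, the expensive coupled model is evaluated only to build the training set, which is what makes PCE attractive at this scale.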

  3. Global sensitivity analysis of a flocculation model for turbidity currents

    NASA Astrophysics Data System (ADS)

    Rochinha, F. A.; Coutinho, A. L.; Cottereau, R.

    2013-05-01

    and breakup coefficients, fractal dimension, and primary particle diameter), the first three of which are particularly difficult to measure experimentally. Several authors have tried to observe the influence of these parameters on some quantities of interest in flocculation experiments, by modifying the values of the parameters one by one around reference values. This type of local sensitivity analysis provides some insight but is not sufficient when the parameters vary over several orders of magnitude. We propose in this presentation to describe a global sensitivity analysis of this flocculation model. The input distributions for the parameters are chosen based on an extensive data set from the literature. The global sensitivity analysis is performed using the Sobol and FAST methods and aims at observing the influence of the parameters on two quantities of interest: (i) the equilibrium diameter of the flocs, that can be computed analytically, and (ii) a maximum floc size in a 1D tidal forcing experiment.

  4. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses, while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while, in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.

  5. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty presents for the decision process, it should not be avoided, or the value of, and science behind, the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses).
Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping

  6. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
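The variogram analogy behind VARS rests on the directional variogram of the model response, γ_i(h) = ½·E[(f(x + h·e_i) − f(x))²], whose behavior across perturbation scales h characterizes sensitivity to factor i. A minimal sketch of that basic quantity at a single scale (toy model, not the VARS framework itself):

```python
import numpy as np

def directional_variogram(f, k, h, n=200, seed=0):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] for each factor i,
    estimated from n random base points in [0, 1-h]^k."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0 - h, size=(n, k))
    gamma = np.zeros(k)
    for i in range(k):
        xp = x.copy()
        xp[:, i] += h  # step of size h along factor i only
        gamma[i] = 0.5 * np.mean((f(xp) - f(x)) ** 2)
    return gamma

# Toy linear model: factor 0 has a 10x steeper response than factor 1
g = directional_variogram(lambda X: 10.0 * X[:, 0] + X[:, 1], k=2, h=0.1)
```

Sweeping h from small to large values recovers derivative-like (Morris) information at small scales and variance-like (Sobol) information at large scales, which is the sense in which VARS subsumes both.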

  7. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  8. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    PubMed

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are a major source of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of the network-location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. PMID:26367184

  9. Global stability and sensitivity analysis of boundary-layer flows past a hemispherical roughness element

    NASA Astrophysics Data System (ADS)

    Citro, V.; Giannetti, F.; Luchini, P.; Auteri, F.

    2015-08-01

    We study the full three-dimensional instability mechanism past a hemispherical roughness element immersed in a laminar Blasius boundary layer. The inherent three-dimensional flow pattern beyond the Hopf bifurcation is characterized by coherent vortical structures usually called hairpin vortices. Direct numerical simulation results are used to analyze the formation and the shedding of hairpin vortices inside the shear layer. The first bifurcation is investigated by global-stability tools. We show the spatial structure of the linear direct and adjoint global eigenmodes of the linearized Navier-Stokes equations and use the structural-sensitivity field to locate the region where the instability mechanism acts. The core of this instability is found to be symmetric and spatially localized in the region immediately downstream of the roughness element. The effect of the variation of the ratio between the obstacle height k and the boundary layer thickness δ*_k is also considered. The resulting bifurcation scenario is found to agree well with previous experimental investigations. A limit regime for k/δ*_k < 1.5 is attained where the critical Reynolds number is almost constant, Re_k ≈ 580. This result indicates that, in these conditions, the only important parameter identifying the bifurcation is the unperturbed (i.e., without the roughness element) velocity slope at the wall.

  10. Global sensitivity analysis of the joint kinematics during gait to the parameters of a lower limb multi-body model.

    PubMed

    El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël

    2015-07-01

    Sensitivity analysis is a typical part of biomechanical model evaluation. For lower limb multi-body models, sensitivity analyses have mainly been performed on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis achieved on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axis, 2.5 mm on the origin and insertion of ligaments) using statistical distributions and propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This allows us to identify the parameters most influential on the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and on the joint angles and displacements. To quantify this influence, a Fourier-based algorithm of global sensitivity analysis coupled with Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints (i.e. the distances between measured and model-determined skin marker positions) and of the kinematic constraints strongly influence the joint kinematics obtained from the lower limb multi-body model: for example, the positions of the skin markers embedded in the shank and pelvis, the parameters of the patellofemoral hinge axis, and the parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of lower limb multi-body models may be considered essential. PMID:25783762

  11. The global burden of disease in 1990: summary results, sensitivity analysis and future directions.

    PubMed Central

    Murray, C. J.; Lopez, A. D.; Jamison, D. T.

    1994-01-01

    A basic requirement for evaluating the cost-effectiveness of health interventions is a comprehensive assessment of the amount of ill health (premature death and disability) attributable to specific diseases and injuries. A new indicator, the number of disability-adjusted life years (DALYs), was developed to assess the burden of disease and injury in 1990 for over 100 causes by age, sex and region. The DALY concept provides an integrative, comprehensive methodology to capture the entire amount of ill health which will, on average, be incurred during one's lifetime because of new cases of disease and injury in 1990. It differs in many respects from previous attempts at global and regional health situation assessment which have typically been much less comprehensive in scope, less detailed, and limited to a handful of causes. This paper summarizes the DALY estimates for 1990 by cause, age, sex and region. For the first time, those responsible for deciding priorities in the health sector have access to a disaggregated set of estimates which, in addition to facilitating cost-effectiveness analysis, can be used to monitor global and regional health progress for over a hundred conditions. The paper also shows how the estimates depend on particular values of the parameters involved in the calculation. PMID:8062404

  12. Technical note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Kollat, J. B.; Reed, P. M.; Wagener, T.

    2013-04-01

    The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River Watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly identify sensitive and insensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.

  13. Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Kollat, J. B.; Reed, P. M.; Wagener, T.

    2013-07-01

    The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
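The evaluation-count gap reported above follows directly from the sampling schemes: a common Saltelli design for Sobol' first- and total-order indices costs N·(k + 2) model runs for N base samples over k parameters, while the method of Morris costs r·(k + 1) runs for r trajectories. A back-of-envelope sketch (the N, r, and k values here are illustrative, not those of the HL-RDHM study):

```python
def saltelli_cost(n_base, k):
    """Model runs for Sobol' first- and total-order indices with the
    common Saltelli sampling scheme: N * (k + 2)."""
    return n_base * (k + 2)

def morris_cost(r, k):
    """Model runs for the method of Morris: r trajectories, each
    requiring k + 1 model evaluations."""
    return r * (k + 1)

# Illustrative comparison for a hypothetical 18-parameter model
sobol_runs = saltelli_cost(1000, 18)
morris_runs = morris_cost(50, 18)
```

Because N must grow with k to stabilize the Sobol' indices while r can stay small for screening, the ratio widens rapidly for highly parameterized distributed models.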

  14. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to determine which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore, a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs. These outputs include whole domain maximum

  15. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  15. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of tensor products of univariate functions, which is computationally more efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method. PMID:21656089
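The VCE-based first-order index described above can be illustrated numerically. The sketch below replaces the MARS meta-model with simple quantile binning to estimate Var(E[Y|X_i])/Var(Y); the toy model, sample size and bin count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def vce_index(x, y, bins=20):
    """Estimate Var(E[Y|X]) / Var(Y) by binning X into quantile slices and
    averaging Y per slice (a crude stand-in for the MARS meta-model)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.bincount(idx, minlength=bins) / len(x)
    vce = np.sum(weights * (cond_means - y.mean()) ** 2)
    return vce / np.var(y)

rng = np.random.default_rng(1)
x1, x2 = rng.random(20000), rng.random(20000)
y = 4 * x1 + x2   # x1 should dominate: analytic indices are 16/17 and 1/17
```

For this additive toy model the binned estimates should be close to the analytic values 16/17 ≈ 0.94 and 1/17 ≈ 0.06.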

  17. Radionuclide migration through fractured rock for arbitrary-length decay chain: Analytical solution and global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Shahkarami, Pirouz; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars

    2015-01-01

    This study presents an analytical approach to simulate nuclide migration through a channel in a fracture accounting for an arbitrary-length decay chain. The nuclides are retarded as they diffuse in the porous rock matrix and stagnant zones in the fracture. The Laplace transform and similarity transform techniques are applied to solve the model. The analytical solution to the nuclide concentrations at the fracture outlet is governed by nine parameters representing different mechanisms acting on nuclide transport through a fracture, including diffusion into the rock matrices, diffusion into the stagnant water zone, chain decay and hydrodynamic dispersion. Furthermore, to assess how sensitive the results are to parameter uncertainties, the Sobol method is applied in variance-based global sensitivity analyses of the model output. The Sobol indices show how uncertainty in the model output is apportioned to the uncertainty in the model input. This method takes into account both direct effects and interaction effects between input parameters. The simulation results suggest that in the case of pulse injections, ignoring the effect of a stagnant water zone can lead to significant errors in the time of first arrival and the peak value of the nuclides. Likewise, neglecting the parent and modeling its daughter as a single stable species can result in a significant overestimation of the peak value of the daughter nuclide. It is also found that as the dispersion increases, the early arrival time and the peak time of the daughter decrease while the peak value increases. More importantly, the global sensitivity analysis reveals that for time periods greater than a few thousand years, the uncertainty of the model output is more sensitive to the values of the individual parameters than to the interaction between them. Moreover, if one tries to evaluate the true values of the input parameters at the same cost and effort, the determination of priorities should follow a certain
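The variance-based Sobol indices used above, covering both direct effects and interactions, can be estimated with a standard Saltelli-type Monte Carlo scheme. A minimal NumPy sketch, assuming independent uniform inputs and a toy model in place of the fracture-transport solution:

```python
import numpy as np

def sobol_indices(model, d, n=2**14, rng=None):
    """Saltelli-style Monte Carlo estimates of first-order (S) and
    total-effect (ST) Sobol indices for independent U(0,1) inputs."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, d))               # two independent sample matrices
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    V = np.var(np.concatenate([fA, fB]))  # total output variance
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # A with column i taken from B
        fABi = model(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / V          # direct effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V   # effect incl. interactions
    return S, ST

# Toy model with a known ANOVA decomposition: f = x + 2y + xy
f = lambda X: X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1]
S, ST = sobol_indices(f, d=2, rng=0)
```

For this model the analytic values are S ≈ (0.262, 0.728) and ST ≈ (0.272, 0.738); the small gap between S and ST reflects the weak xy interaction.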

  18. Switch of sensitivity dynamics revealed with DyGloSA toolbox for dynamical global sensitivity analysis as an early warning for system's critical transition.

    PubMed

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems like climate, economy, ecosystems it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA - a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equations models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions, which dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of the computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on 32- and 64-bit Windows and Linux systems. PMID:24367574

  19. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    The weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing causes "production basis risk", whereby the selected weather indexes and their thresholds do not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results that even under similar cumulative rainfall and temperature environments, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.

  20. Making sense of global sensitivity analyses

    NASA Astrophysics Data System (ADS)

    Wainwright, Haruko M.; Finsterle, Stefan; Jung, Yoojin; Zhou, Quanlin; Birkholzer, Jens T.

    2014-04-01

    This study presents improved understanding of sensitivity analysis methods through a comparison of the local sensitivity method and two global sensitivity analysis methods: the Morris and Sobol'/Saltelli methods. We re-interpret the variance-based sensitivity indices from the Sobol'/Saltelli method as difference-based measures. This suggests that the difference-based local and Morris methods provide the effect of each parameter including its interaction with others, similar to the total sensitivity index from the Sobol'/Saltelli method. We also develop an alternative approximation method to efficiently compute the Sobol' index, using one-dimensional fitting of system responses from a Monte-Carlo simulation. For illustration, we conduct a sensitivity analysis of pressure propagation induced by fluid injection and leakage in a reservoir-aquitard-aquifer system. The results show that the three methods provide consistent parameter importance rankings in this system. Our study also reveals that the three methods can provide additional information to improve system understanding.
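The Morris method compared above screens parameters via elementary effects: finite differences of the output at randomly located base points, summarized by mu* (overall influence) and sigma (a proxy for nonlinearity and interactions). A minimal sketch using a radial one-at-a-time design and a toy model, both illustrative assumptions rather than the study's setup:

```python
import numpy as np

def morris_ee(model, d, r=50, delta=0.25, rng=None):
    """Radial one-at-a-time sampling of elementary effects on [0,1]^d.
    Returns mu* (mean |EE|) and sigma (std of EE) per input."""
    rng = np.random.default_rng(rng)
    ee = np.empty((r, d))
    for k in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        fx = model(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta                # perturb one input at a time
            ee[k, i] = (model(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# f = x0 + 5*x1^2: x1 is both more influential and nonlinear (sigma > 0)
f = lambda x: x[0] + 5 * x[1] ** 2
mu_star, sigma = morris_ee(f, d=2, rng=0)
```

The linear input x0 yields a constant elementary effect of 1 (sigma near zero), while the quadratic input x1 yields location-dependent effects, which is exactly the mu*/sigma pattern Morris screening exploits.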

  1. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes variance-based adaptive strategies that aim to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
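The third adaptivity level, regression that retains only the most influential polynomials, can be sketched with ordinary monomials in place of the PDD basis. This is an illustrative simplification: thresholding least-squares coefficients by magnitude stands in for the paper's stepwise procedure, and the toy model is an assumption:

```python
import numpy as np

def sparse_poly_surrogate(X, y, tol=1e-3):
    """Fit a degree-2 polynomial surrogate by least squares, then drop
    terms with negligible coefficients (a crude stand-in for the adaptive
    sparse-PDD / stepwise-regression procedure)."""
    n, d = X.shape
    cols, names = [np.ones(n)], ["1"]     # candidate basis: 1, x_i, x_i^2, x_i*x_j
    for i in range(d):
        cols.append(X[:, i]);        names.append(f"x{i}")
        cols.append(X[:, i] ** 2);   names.append(f"x{i}^2")
        for j in range(i + 1, d):
            cols.append(X[:, i] * X[:, j]); names.append(f"x{i}*x{j}")
    Phi = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return {nm: c for nm, c in zip(names, coef) if np.abs(c) > tol}

rng = np.random.default_rng(6)
X = rng.random((500, 3))
y = 2 * X[:, 0] + X[:, 1] * X[:, 2]   # sparse true model: two active terms
terms = sparse_poly_surrogate(X, y)
```

Only the two truly active terms should survive the thresholding, so subsequent surrogate evaluations (and Sobol' index extraction in the PDD setting) touch a far smaller basis.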

  2. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  3. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty, which undermines model reliability and restricts model application. A large number of parameters, in addition to a lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, compared to the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
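The bootstrapping step used above to attach 95% confidence intervals to sensitivity indices can be sketched generically. Here a squared Pearson correlation serves as a stand-in scalar index (the paper's variance- and density-based estimators would slot into the same resampling loop); the data-generating model is an illustrative assumption:

```python
import numpy as np

def bootstrap_ci(x, y, index, n_boot=1000, alpha=0.05, rng=None):
    """Percentile bootstrap CI for any scalar sensitivity index(x, y)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)       # resample (x, y) pairs with replacement
        stats[b] = index(x[idx], y[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Stand-in index: squared Pearson correlation (fraction of output variance
# explained by a linear effect of x) -- not the paper's estimators.
r2 = lambda x, y: np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(2)
x = rng.random(2000)
y = 3 * x + rng.normal(0, 1, 2000)        # true R^2 = 0.75 / 1.75, about 0.43
lo, hi = bootstrap_ci(x, y, r2, rng=3)
```

A wide interval (relative to the index value) is precisely the signal the paper uses to flag unreliable rankings for non-influential parameters.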

  4. Using time-varying global sensitivity analysis to understand the importance of different uncertainty sources in hydrological modelling

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Wagener, Thorsten

    2016-04-01

    Simulations from environmental models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation, and to prioritize efforts for uncertainty reduction. Global Sensitivity Analysis (GSA) provides the theoretical framework and the numerical tools to gain this understanding. However, in traditional applications of GSA, model outputs are an aggregation of the full set of simulated variables. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. In this work, we propose a time-varying version of a recently developed density-based GSA method, called PAWN, as a viable option to reduce this loss of information. We apply our approach to a medium-complexity hydrological model in order to address two questions: [1] Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? [2] Do these influences change in catchments with different characteristics? The results present the first quantitative investigation on the relative importance of parameter and data uncertainty across time. They also provide a demonstration of the value of time-varying GSA to investigate the propagation of uncertainty through numerical models and therefore guide additional data collection needs and model calibration/assessment.
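PAWN is density-based: it compares the unconditional CDF of the output against CDFs conditioned on slices of one input, using Kolmogorov-Smirnov distances. A minimal static (non-time-varying) sketch on a toy model, with the slicing scheme and sample sizes as illustrative assumptions:

```python
import numpy as np

def pawn_index(x, y, n_cond=10):
    """PAWN-style index: maximum Kolmogorov-Smirnov distance between the
    unconditional CDF of y and CDFs of y conditioned on slices of x."""
    y_sorted = np.sort(y)
    order = np.argsort(x)
    slices = np.array_split(order, n_cond)    # conditioning slices of x
    ks_max = 0.0
    for s in slices:
        ys = np.sort(y[s])
        # empirical CDFs evaluated on the pooled sample grid
        F_cond = np.searchsorted(ys, y_sorted, side="right") / len(ys)
        F_all = np.arange(1, len(y_sorted) + 1) / len(y_sorted)
        ks_max = max(ks_max, np.max(np.abs(F_cond - F_all)))
    return ks_max

rng = np.random.default_rng(4)
x1, x2 = rng.random(5000), rng.random(5000)
y = 5 * x1 + x2       # fixing x1 reshapes the output distribution far more
```

Conditioning on the dominant input collapses the output distribution and gives a large KS distance, while conditioning on the weak input barely moves it.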

  5. Using global sensitivity analysis to evaluate the uncertainties of future shoreline changes under the Bruun rule assumption

    NASA Astrophysics Data System (ADS)

    Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy

    2016-04-01

    Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F., 2004. How important is global warming for coastal erosion? An editorial comment. Climatic Change, vol. 64, no. 1-2, doi:10.1023/B:CLIM.0000024785.91858. ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic sea-level rise projections derived from the IPCC. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. On the contrary, the model predictions are unlikely to differ from the current situation under scenario RCP 2.6. To gain insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of shoreline change uncertainty is due to the climate change scenario if all anthropogenic greenhouse gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
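The Bruun rule term referred to above translates sea-level rise into shoreline retreat through the inverse slope of the active beach profile. A toy calculation with illustrative numbers, not values from the study:

```python
# Bruun rule: shoreline retreat R = S * L / (h + B), i.e. sea-level rise S
# scaled by the active profile length L over closure depth h plus berm
# height B (the inverse of the average profile slope).

def bruun_retreat(slr_m, profile_length_m, closure_depth_m, berm_height_m):
    """Shoreline retreat in metres for a given sea-level rise in metres."""
    return slr_m * profile_length_m / (closure_depth_m + berm_height_m)

# 0.5 m of sea-level rise on a gentle profile with slope 1/100
retreat = bruun_retreat(slr_m=0.5, profile_length_m=1000,
                        closure_depth_m=8, berm_height_m=2)
```

With these numbers the rule predicts 50 m of retreat, i.e. retreat roughly 100 times the sea-level rise on a 1/100 slope, which is why gentle-slope coasts are the critical test case for (in)validating the rule.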

  6. Global sensitivity analysis of the climate-vegetation system to astronomical forcing: an emulator-based approach

    NASA Astrophysics Data System (ADS)

    Bounceur, N.; Crucifix, M.; Wilkinson, R. D.

    2015-05-01

    A global sensitivity analysis is performed to describe the effects of astronomical forcing on the climate-vegetation system simulated by the model of intermediate complexity LOVECLIM in interglacial conditions. The methodology relies on the estimation of sensitivity measures, using a Gaussian process emulator as a fast surrogate of the climate model, calibrated on a set of well-chosen experiments. The outputs considered are the annual mean temperature and precipitation and the growing degree days (GDD). The experiments were run on two distinct land surface schemes to estimate the importance of vegetation feedbacks on climate variance. This analysis provides a spatial description of the variance due to the factors and their combinations, in the form of "fingerprints" obtained from the covariance indices. The results are broadly consistent with the current understanding of Earth's climate response to the astronomical forcing. In particular, precession and obliquity are found to contribute in LOVECLIM equally to GDD in the Northern Hemisphere, and the effect of obliquity on the response of Southern Hemisphere temperature dominates precession effects. Precession dominates precipitation changes in subtropical areas. Compared to standard approaches based on a small number of simulations, the methodology presented here allows us to identify more systematically regions susceptible to experiencing rapid climate change in response to the smooth astronomical forcing change. In particular, we find that using interactive vegetation significantly enhances the expected rates of climate change, specifically in the Sahel (up to 50% precipitation change in 1000 years) and in the Canadian Arctic region (up to 3° in 1000 years). None of the tested astronomical configurations were found to induce multiple steady states, but, at low obliquity, we observed the development of an oscillatory pattern that has already been reported in LOVECLIM. 
Although the mathematics of the analysis are

  7. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    SciTech Connect

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but very little in tropical continental regions.
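Latin hypercube sampling, one of the two sampling approaches used for the CAM5 ensembles, stratifies each parameter range so that every 1/n interval is sampled exactly once in every dimension. A basic sketch on the unit hypercube (without the spacing optimizations production studies often add):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """n samples in [0,1)^d with exactly one sample per 1/n stratum in
    every dimension: permute the strata per column, then jitter within."""
    rng = np.random.default_rng(rng)
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.random((n, d))) / n

X = latin_hypercube(100, 3, rng=0)   # 100 points in 3 dimensions
```

Scaling each column to a physical parameter range then yields an ensemble design in which every parameter's marginal distribution is evenly covered, even with relatively few runs.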

  8. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGESBeta

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; et al

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but very little in tropical continental regions.

  9. Comparison of a one-at-a-time and variance-based global sensitivity analysis applied to a parsimonious urban hydrological model

    NASA Astrophysics Data System (ADS)

    Coutu, S.

    2014-12-01

    A sensitivity analysis was conducted on an existing parsimonious model aiming to reproduce flow in engineered urban catchments and sewer networks. The model is characterized by its parsimonious structure and is limited to seven calibration parameters. The objective of this study is to demonstrate how different levels of sensitivity analysis can influence the interpretation of input parameter relevance in urban hydrology, even for light-structure models. In this perspective, we applied a one-at-a-time (OAT) sensitivity analysis (SA) as well as a variance-based, global and model-independent method: the calculation of Sobol indices. Sobol's first-order and total-effect indices were estimated using a Monte-Carlo approach. We present evidence of the irrelevance of calculating Sobol's second-order indices when the uncertainty on index estimation is too high. Sobol's method showed that two parameters drive model performance: the subsurface discharge rate and the root zone drainage coefficient (Clapp exponent). Interestingly, the surface discharge rate, responsible for flow in impervious areas, has no significant relevance, contrary to what was expected from the one-at-a-time sensitivity analysis alone. This finding is not obvious a priori. It highlights the utility of carrying out variance-based sensitivity analysis in the domain of urban hydrology, even when using a parsimonious model, in order to prevent misunderstandings of the system dynamics and consequent management mistakes.

  10. A Global Sensitivity Analysis of Arctic Sea Ice to Parameter Uncertainty in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.; Hunke, E. C.

    2015-12-01

    Sea ice and climate models are key to understanding and predicting ongoing changes in the Arctic climate system, particularly sharp reductions in sea ice area and volume. There are, however, uncertainties arising from multiple sources, including parametric uncertainty, which affect model output. The Los Alamos Sea Ice Model (CICE) includes complex parameterizations of sea ice processes with a large number of parameters for which accurate values are still not well established. To enhance the credibility of sea ice predictions, it is necessary to understand the sensitivity of model results to uncertainties in input parameters. In this work we conduct a variance-based global sensitivity analysis of sea ice extent, area, and volume. This approach allows full exploration of our 40-dimensional parametric space, and the model sensitivity is quantified in terms of main and total effects indices. The global sensitivity analysis does not require assumptions of additivity or linearity, which are implicit in the most commonly used one-at-a-time sensitivity analyses. A Gaussian process emulator of the sea ice model is built and then used to generate the large number of samples necessary to calculate the sensitivity indices, at a much lower computational cost than using the full model. The sensitivity indices are used to rank the most important model parameters affecting Arctic sea ice extent, area, and volume. The most important parameters contributing to the model variance include snow conductivity and grain size, and the time-scale for drainage of melt ponds. Other important parameters include the thickness of the ice radiative scattering layer, ice density, and the ice-ocean drag coefficient. We discuss physical processes that explain variations in simulated sea ice variables in terms of the first-order parameter effects and the most important interactions among them.
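The Gaussian process emulator strategy described above, fit on a limited number of expensive model runs and then sampled cheaply for the sensitivity indices, can be sketched as follows. The squared-exponential kernel, fixed hyperparameters and toy model are all illustrative assumptions:

```python
import numpy as np

def gp_emulator(X, y, Xnew, length=0.2, noise=1e-6):
    """Gaussian process interpolation with a squared-exponential kernel:
    fit on (X, y) model runs, then predict cheaply at Xnew."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xnew, X) @ alpha

rng = np.random.default_rng(5)
X = rng.random((200, 2))                   # 200 "expensive" model runs
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]
Xnew = rng.random((1000, 2))               # cheap emulator evaluations
y_hat = gp_emulator(X, y, Xnew)
y_true = np.sin(2 * np.pi * Xnew[:, 0]) + Xnew[:, 1]   # for accuracy check
```

Once the emulator is accurate, the thousands of evaluations a variance-based analysis needs cost a matrix-vector product each rather than a full model run.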

  11. Model-based decision analysis of remedial alternatives using info-gap theory and Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS)

    NASA Astrophysics Data System (ADS)

    Harp, D.; Vesselinov, V. V.

    2011-12-01

    A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions with observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repulsing agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where the information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. 
posterior parameter distributions are not produced); however, this

  12. Sensitivity analysis of a sediment dynamics model applied in a Mediterranean river basin: global change and management implications.

    PubMed

    Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J

    2015-01-01

    Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. PMID:25302447
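The USLE at the core of the InVEST sediment load estimate is a simple product of factors, which is why the output responds linearly to the human-controlled cover management factor C discussed above. A toy calculation with purely illustrative factor values:

```python
# Universal Soil Loss Equation (USLE): A = R * K * LS * C * P.
# All factor values below are illustrative, not calibrated values.

def usle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) from rainfall erosivity R, soil
    erodibility K, slope length-steepness LS, cover management C and
    support practice P."""
    return R * K * LS * C * P

base = usle(R=1500, K=0.3, LS=1.2, C=0.2, P=1.0)       # roughly 108 t/ha/yr
halved_C = usle(R=1500, K=0.3, LS=1.2, C=0.1, P=1.0)   # halving C halves A
```

Because every factor enters multiplicatively, a relative change in any single factor translates one-to-one into the same relative change in soil loss, which makes management levers like C directly interpretable.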

  13. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis.

    PubMed

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
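
    The Morris method cited here screens many inputs cheaply by averaging "elementary effects", finite-difference responses to one-at-a-time perturbations from many base points. A hedged sketch of a radial one-at-a-time variant; the three-input toy model is illustrative, not the 90-trait APSIM-Wheat setup:

```python
import random

# Simplified elementary-effects screen in the spirit of the Morris method.
# mu* (mean absolute elementary effect) ranks input influence.

def toy_model(x):
    # x[0] acts linearly, x[1] nonlinearly, x[2] is inert.
    return 2.0 * x[0] + x[1] ** 2

def morris_mu_star(model, k, r=200, delta=0.25, seed=1):
    """Mean absolute elementary effect (mu*) per input over r base points."""
    rng = random.Random(seed)
    totals = [0.0] * k
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta                       # perturb input i only
            totals[i] += abs(model(xp) - y0) / delta
    return [t / r for t in totals]

mu_star = morris_mu_star(toy_model, k=3)
# Influential inputs (x[0], x[1]) receive large mu*; the inert x[2] gets 0.
```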

  14. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis

    PubMed Central

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assist in evaluating the major most influential inputs, providing the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483

  15. Global sensitivity analysis of CMIP5 predictions of future changes in precipitation, reference evapotranspiration and drought index (SPEI) over the U.S.

    NASA Astrophysics Data System (ADS)

    Chang, S. J.; Graham, W. D.; Hwang, S.

    2014-12-01

    Projecting evapotranspiration for estimating future agricultural irrigation demand is uncertain because estimates of future precipitation and evapotranspiration vary significantly depending on the Global Climate Model (GCM), future RCP emission scenario and reference evapotranspiration (RET) estimation method selected. Understanding the relative contributions of these various sources of uncertainty is important for effective long-term water resource planning. In this study, variance-based sensitivity analysis (Saltelli et al., 2010) was used to assess the sensitivity of estimated future changes in precipitation, RET and the Standardized Precipitation Evapotranspiration Index (SPEI) drought index to 9 GCMs, 3 RCP scenarios, and 11 ET estimation methods over 9 regions of the United States for two future periods: 2030-2060 and 2070-2100. Future changes in precipitation were found to be most sensitive to GCM selection for all U.S. regions and both future periods. Projected changes in future RET and SPEI were more sensitive to the selection of ET method and GCM than to the selection of RCP scenario. In general, changes in ET and SPEI were most sensitive to ET estimation methods in the cold season and to GCM selection in the warm season; however, sensitivities differed by region, season and future period. This study underscores the importance of evaluating projections of future agricultural irrigation demand for an ensemble of GCMs and ET estimation methods rather than relying on a few GCMs and a single ET estimation method.
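
    The core idea of apportioning projection variance among discrete modeling choices can be sketched with a toy ensemble. The per-GCM and per-RCP effects below are hypothetical and purely additive (so the two main-effect shares sum to one); real ensembles also carry interaction terms, which the full Saltelli design in the study accounts for:

```python
from statistics import mean, pvariance

# Crude main-effect measure: variance of group means over total variance,
# grouping a balanced toy ensemble by each discrete factor.

gcm_effect = [0.5 * g for g in range(9)]   # strong spread across 9 GCMs
rcp_effect = [0.1 * r for r in range(3)]   # weak spread across 3 RCPs

ensemble = [(g, r, gcm_effect[g] + rcp_effect[r])
            for g in range(9) for r in range(3)]
total_var = pvariance([y for _, _, y in ensemble])

def main_effect_share(pos):
    """Share of variance explained by factor pos (0 = GCM, 1 = RCP)."""
    groups = {}
    for g, r, y in ensemble:
        groups.setdefault((g, r)[pos], []).append(y)
    return pvariance([mean(v) for v in groups.values()]) / total_var

S_gcm = main_effect_share(0)   # GCM choice dominates in this toy setup
S_rcp = main_effect_share(1)
```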

  16. Cosmopolitan Sensitivities, Vulnerability, and Global Englishes

    ERIC Educational Resources Information Center

    Jacobsen, Ushma Chauhan

    2015-01-01

    This paper is the outcome of an afterthought that assembles connections between three elements: the ambitions of cultivating cosmopolitan sensitivities that circulate vibrantly in connection with the internationalization of higher education, a course on Global Englishes at a Danish university and the sensation of vulnerability. It discusses the…

  17. Sensitivity Test Analysis

    Energy Science and Technology Software Center (ESTSC)

    1992-02-20

    SENSIT,MUSIG,COMSEN is a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.

  18. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, the effect of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.

  19. Global Sensitivity Measures from Given Data

    SciTech Connect

    Elmar Plischke; Emanuele Borgonovo; Curtis L. Smith

    2013-05-01

    Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures to datasets produced outside a conventional input–output functional framework.
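
    The "given data" idea can be illustrated with a simple conditional-mean estimator: sort an existing input-output sample by one input, bin it, and compare the variance of the conditional output means to the total variance, with no new model runs. This is a simplified stand-in for the L1-norm-based estimators developed in the paper, shown on a toy model:

```python
import random

def given_data_first_order(x, y, bins=20):
    """First-order sensitivity index of input x from paired samples (x, y),
    estimated by binning x and comparing conditional output means."""
    n = len(y)
    grand = sum(y) / n
    var = sum((v - grand) ** 2 for v in y) / n
    order = sorted(range(n), key=lambda j: x[j])
    size = n // bins
    between = 0.0
    for b in range(bins):
        idx = order[b * size:(b + 1) * size]
        m = sum(y[j] for j in idx) / len(idx)
        between += len(idx) * (m - grand) ** 2
    return between / n / var

rng = random.Random(3)
x1 = [rng.random() for _ in range(20000)]
x2 = [rng.random() for _ in range(20000)]
y = [3.0 * a + b for a, b in zip(x1, x2)]
S1 = given_data_first_order(x1, y)   # analytic value 0.9 for y = 3*x1 + x2
S2 = given_data_first_order(x2, y)   # analytic value 0.1
```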

  20. Sensitivity Analysis Without Assumptions

    PubMed Central

    VanderWeele, Tyler J.

    2016-01-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions, which both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy, but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
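
    The two-parameter bound described here has a simple closed form: with RR_UD the confounder-outcome and RR_EU the exposure-confounder relative risks, confounding can shift an observed risk ratio by at most their joint bounding factor. A sketch with illustrative numbers:

```python
import math

# VanderWeele-Ding bounding factor and the derived threshold ("E-value")
# for explaining away an observed risk ratio. Inputs are illustrative.

def bounding_factor(rr_ud, rr_eu):
    """Maximum bias factor from an unmeasured confounder with
    confounder-outcome relative risk rr_ud and exposure-confounder
    relative risk rr_eu."""
    return rr_ud * rr_eu / (rr_ud + rr_eu - 1.0)

def explain_away_threshold(rr):
    """Smallest joint strength of both confounding associations that could
    fully explain away an observed risk ratio rr > 1."""
    return rr + math.sqrt(rr * (rr - 1.0))

B = bounding_factor(2.0, 2.0)      # two modest associations bound the bias at 4/3
E = explain_away_threshold(3.9)    # strength needed to explain away RR = 3.9
```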

  1. Sensitivity analysis of SPURR

    SciTech Connect

    Witholder, R.E.

    1980-04-01

    The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.

  2. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. Although long recognized, this idea remains computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploring symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivity for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.

  3. Analysis of the behavior of a rainfall-runoff model using three global sensitivity analysis methods evaluated at different temporal scales

    NASA Astrophysics Data System (ADS)

    Massmann, C.; Holzmann, H.

    2012-12-01

    The effect of 11 parameters on the discharge of a conceptual rainfall-runoff model was analyzed for a small Austrian catchment. The sensitivities were computed using three methods: Sobol's indices, the mutual entropy and regional sensitivity analysis (RSA). The calculations were carried out for different temporal scales of evaluation ranging from daily to a multiannual period. A comparison of the methods shows that the mutual entropy and the RSA methods give more robust results than Sobol's method, which shows a higher variability in the sensitivities when they are calculated using different data sets. While all sensitivity methods are suitable for identifying the most sensitive parameters of a model, there are increasing differences in the results when the parameters become less important and also when shorter temporal scales are considered. A correlation analysis further indicated that the periods in which the parameter sensitivity rankings did not agree between the different methods are characterized by a higher impact of the parameter interactions on the modeled discharge. An analysis of the parameter sensitivity across the scales showed that the number of important parameters decreases when longer evaluation periods are considered. For instance, all parameters were important for at least one day at the daily scale, while at a yearly scale only the parameters characterizing the soil storage and the recession constants for interflow and percolation had high sensitivities. With respect to the impact of the interactions between parameters on the model results, it was observed that the largest effect is related to the parameters describing the size of the soil storage, the interflow and the percolation flow recession constants. Further, it was observed that there is a positive correlation between the importance of the interactions and the measured discharge.
While the study focuses on quantitative sensitivity measures, it is also highlighted
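
    The RSA approach compared in this study can be sketched compactly: split Monte Carlo parameter samples into "behavioral" and "non-behavioral" sets by an output threshold, then score each parameter by the Kolmogorov-Smirnov distance between its two conditional distributions. A two-parameter toy model stands in for the 11-parameter rainfall-runoff model:

```python
import random
from bisect import bisect_right

def ks_distance(a, b):
    """Maximum gap between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    return max(abs(bisect_right(a, v) / len(a) - bisect_right(b, v) / len(b))
               for v in a + b)

rng = random.Random(11)
samples = [(rng.random(), rng.random()) for _ in range(2000)]
outputs = [5.0 * p1 + 0.1 * p2 for p1, p2 in samples]       # p1 dominates
threshold = sorted(outputs)[len(outputs) // 2]              # median split
behavioral = [s for s, y in zip(samples, outputs) if y <= threshold]
rest = [s for s, y in zip(samples, outputs) if y > threshold]

d1 = ks_distance([p[0] for p in behavioral], [p[0] for p in rest])
d2 = ks_distance([p[1] for p in behavioral], [p[1] for p in rest])
# The influential parameter p1 separates the two sets; p2 barely does.
```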

  4. RESRAD parameter sensitivity analysis

    SciTech Connect

    Cheng, J.J.; Yu, C.; Zielen, A.J.

    1991-08-01

    Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.

  5. Nitrogen cycle implementation in the Dynamic Global Vegetation Model LPJmL: description, evaluation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Vilain, Guillaume; Müller, Christoph; Schaphoff, Sibyll; Lotze-Campen, Hermann; Feulner, Georg

    2013-04-01

    Nitrogen (N) cycling affects carbon uptake by the terrestrial biosphere and imposes controls on carbon cycle response to variation in temperature and precipitation. In the absence of carbon-nitrogen interactions, surface warming significantly reduces carbon sequestration in both vegetation and soil by increasing respiration and decomposition (a positive feedback). If plant carbon uptake, however, is assumed to be nitrogen limited, an increase in decomposition leads to an increase in nitrogen availability stimulating plant growth. The resulting increase in carbon uptake by vegetation can exceed carbon loss from the soil, leading to enhanced carbon sequestration (a negative feedback). Cultivation of biofuel crops is expanding because of its potential for climate mitigation, whereas the environmental impacts of bioenergy production still remain unknown. While carbon payback times are being increasingly investigated, non-CO2 greenhouse gas emissions of bioenergy production have received little attention so far. We introduced a process-based nitrogen cycle to the LPJmL model at the global scale (each grid cell being 0.5° latitude by 0.5° longitude in size). The model captures mechanisms essential for N cycling and their feedbacks on C cycling: the uptake, allocation and turnover of N in plants, N limitation of plant productivity, and soil N transformation including mineralization, N2 fixation, nitrification and denitrification, NH3 volatilization, N leaching and N2O emissions. Our model captures many essential characteristics of C-N interactions and is capable of broadly recreating spatial and temporal variations in N and C dynamics. Here we evaluate LPJmL by comparing the predicted variables with data from sites with sufficient observations to describe ecosystem nitrogen and carbon fluxes and contents and their responses to climate as well as with estimates of N-dynamics at the global scale. The simulations presented here use no site-specific parameterizations in

  6. Global-scale projection and its sensitivity analysis of the health burden attributable to childhood undernutrition under the latest scenario framework for climate change research

    NASA Astrophysics Data System (ADS)

    Ishida, Hiroyuki; Kobayashi, Shota; Kanae, Shinjiro; Hasegawa, Tomoko; Fujimori, Shinichiro; Shin, Yonghee; Takahashi, Kiyoshi; Masui, Toshihiko; Tanaka, Akemi; Honda, Yasushi

    2014-05-01

    This study assessed the health burden attributable to childhood underweight through 2050 focusing on disability-adjusted life years (DALYs), by considering the latest scenarios for climate change studies (representative concentration pathways and shared socioeconomic pathways (SSPs)) and conducting sensitivity analysis. A regression model for estimating DALYs attributable to childhood underweight (DAtU) was developed using the relationship between DAtU and childhood stunting. We combined a global computable general equilibrium model, a crop model, and two regression models to assess the future health burden. We found that (i) world total DAtU decreases from 2005 by 28–63% in 2050 depending on the socioeconomic scenarios. Per capita DAtU also decreases in all regions under all scenarios in 2050, but the decreases vary significantly by region and scenario. (ii) The impact of climate change is relatively small in the framework of this study but, on the other hand, socioeconomic conditions have a great impact on the future health burden. (iii) Parameter uncertainty of the regression models is the second-largest contributor to uncertainty in the results, following changes in socioeconomic conditions, and uncertainty derived from the differences among global circulation models is the smallest in the framework of this study.

  7. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F., Jr.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
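
    The sensitivity and elasticity quantities discussed here follow from standard eigenvector formulas: sensitivities are d(lambda)/d(a_ij) = v_i w_j / <v, w>, and elasticities rescale them proportionally. A sketch with a hypothetical 2×2 stage matrix (not the killer whale data analyzed in the paper):

```python
import numpy as np

# Sensitivity and elasticity of lambda for a toy population matrix model.
A = np.array([[0.0, 1.5],    # fecundities
              [0.5, 0.9]])   # survival / transition rates

evals, W = np.linalg.eig(A)
i = np.argmax(evals.real)
lam = evals.real[i]                       # finite rate of increase, lambda
w = W[:, i].real                          # right eigenvector: stable stage structure
evalsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(evalsT.real)].real     # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)   # sensitivities d(lambda)/d(a_ij)
E = A * S / lam                # elasticities: proportional sensitivities
# Elasticities are scale-free and sum to 1 over the whole matrix, which is
# why they are attractive for comparing demographic rates measured on
# different scales.
```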

  8. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2002-01-01

    The Laser Interferometer Space Antenna (LISA) for the detection of Gravitational Waves is a very long baseline interferometer which will measure the changes in the distance of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far field phase varies when the telescope parameters change as a result of small temperature changes.

  9. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.

  10. The application of global sensitivity analysis in the development of a physiologically based pharmacokinetic model for m-xylene and ethanol co-exposure in humans

    PubMed Central

    Loizou, George D.; McNally, Kevin; Jones, Kate; Cocker, John

    2015-01-01

    Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688

  11. The application of global sensitivity analysis in the development of a physiologically based pharmacokinetic model for m-xylene and ethanol co-exposure in humans.

    PubMed

    Loizou, George D; McNally, Kevin; Jones, Kate; Cocker, John

    2015-01-01

    Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688

  12. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
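
    The maximum-likelihood idea behind such analyses can be sketched directly: given go/no-go results at known stress levels, estimate the μ and σ of a normal threshold distribution by maximizing the binomial likelihood. A coarse grid search stands in for a proper optimizer here, and the eight trials are made up:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_likelihood(mu, sigma, trials):
    """Log-likelihood of go/no-go data under a normal threshold model:
    a specimen stressed at `level` fires with probability Phi((level-mu)/sigma)."""
    ll = 0.0
    for level, fired in trials:
        p = min(max(norm_cdf((level - mu) / sigma), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if fired else 1.0 - p)
    return ll

# (stress level, fired?) pairs; note the overlap around 2.4-2.6.
trials = [(1.5, 0), (2.0, 0), (2.2, 0), (2.4, 1),
          (2.6, 0), (2.8, 1), (3.0, 1), (3.5, 1)]
mu_hat, sigma_hat = max(
    ((m / 100.0, s / 100.0) for m in range(150, 351) for s in range(5, 151)),
    key=lambda ms: log_likelihood(ms[0], ms[1], trials))
# The fit places the mean near the overlap zone of failures and successes.
```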

  13. Determination of DNA methylation associated with Acer rubrum (red maple) adaptation to metals: analysis of global DNA modifications and methylation-sensitive amplified polymorphism.

    PubMed

    Kim, Nam-Soo; Im, Min-Ji; Nkongolo, Kabwe

    2016-08-01

    Red maple (Acer rubrum), a common deciduous tree species in Northern Ontario, has shown resistance to soil metal contamination. Previous reports have indicated that this plant does not accumulate metals in its tissue. However, low levels of nickel and copper corresponding to the bioavailable levels in contaminated soils in Northern Ontario cause severe physiological damage. No differentiation between metal-contaminated and uncontaminated populations has been reported based on genetic analyses. The main objective of this study was to assess whether DNA methylation is involved in A. rubrum adaptation to soil metal contamination. Global cytosine and methylation-sensitive amplified polymorphism (MSAP) analyses were carried out in A. rubrum populations from metal-contaminated and uncontaminated sites. The global modified cytosine ratios in genomic DNA revealed a significant decrease in cytosine methylation in genotypes from a metal-contaminated site compared to uncontaminated populations. Other genotypes from a different metal-contaminated site within the same region appear to be recalcitrant to metal-induced DNA alterations even after ≥30 years of exposure to nickel and copper. MSAP analysis showed a high level of polymorphisms in both uncontaminated (77%) and metal-contaminated (72%) populations. Overall, 205 CCGG loci were identified, of which 127 were methylated at either the outer or inner cytosine. No differentiation among populations was established based on several genetic parameters tested. The variations for nonmethylated and methylated loci were compared by analysis of molecular variance (AMOVA). For methylated loci, molecular variance among and within populations was 1.5% and 13.2%, respectively. These values were low (0.6% among populations and 5.8% within populations) for unmethylated loci. Metal contamination is seen to affect methylation of cytosine residues in CCGG motifs in the A. rubrum populations that were analyzed. PMID:27547351

  14. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analysis from the National Centers for Environmental Prediction (NCEP) and observations made by the ERS-1 radar scatterometer.

  15. Assessment of the contamination of drinking water supply wells by pesticides from surface water resources using a finite element reactive transport model and global sensitivity analysis techniques

    NASA Astrophysics Data System (ADS)

    Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John

    2013-01-01

    A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide MCPP (Mecoprop) which is mobile and relatively persistent, glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide, and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: MCPP concentration at the drinking water well can be up to 7% of surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark where pesticides are found at higher concentrations at shallow depths and close to streams.

  16. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for a wider range of conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations, the management of cover crops (C factor) ranks highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses, where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: mass density of live and dead roots found in the upper inch
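    RUSLE's multiplicative structure is what makes single-factor sensitivities easy to reason about. A minimal sketch (the factor values below are invented for illustration, not taken from the study):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: average annual soil loss A = R * K * LS * C * P, the product
    of rainfall-runoff erosivity (R), soil erodibility (K), slope
    length-steepness (LS), cover management (C), and support practice (P)."""
    return R * K * LS * C * P

# Invented factor values; the multiplicative form means halving the cover
# management factor C halves the predicted loss outright.
A_base = rusle_soil_loss(R=300, K=0.3, LS=1.5, C=0.2, P=1.0)
A_cover = rusle_soil_loss(R=300, K=0.3, LS=1.5, C=0.1, P=1.0)
```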

  17. Saltelli Global Sensitivity Analysis and Simulation Modelling to Identify Intervention Strategies to Reduce the Prevalence of Escherichia coli O157 Contaminated Beef Carcasses

    PubMed Central

    Brookes, Victoria J.; Jordan, David; Davis, Stephen; Ward, Michael P.; Heller, Jane

    2015-01-01

    Introduction: Strains of Shiga-toxin producing Escherichia coli O157 (STEC O157) are important foodborne pathogens in humans, and outbreaks of illness have been associated with consumption of undercooked beef. Here, we determine the most effective intervention strategies to reduce the prevalence of STEC O157 contaminated beef carcasses using a modelling approach. Method: A computational model simulated events and processes in the beef harvest chain. Information from empirical studies was used to parameterise the model. Variance-based global sensitivity analysis (GSA) using the Saltelli method identified variables with the greatest influence on the prevalence of STEC O157 contaminated carcasses. Following a baseline scenario (no interventions), a series of simulations systematically introduced and tested interventions based on influential variables identified by repeated Saltelli GSA, to determine the most effective intervention strategy. Results: Transfer of STEC O157 from hide or gastro-intestinal tract to carcass (improved abattoir hygiene) had the greatest influence on the prevalence of contaminated carcasses. Due to interactions between inputs (identified by Saltelli GSA), combinations of interventions based on improved abattoir hygiene achieved a greater reduction in maximum prevalence than would be expected from an additive effect of single interventions. The most effective combination was improved abattoir hygiene with vaccination, which achieved a greater than ten-fold decrease in maximum prevalence compared to the baseline scenario. Conclusion: Study results suggest that effective interventions to reduce the prevalence of STEC O157 contaminated carcasses should initially be based on improved abattoir hygiene. However, the effect of improved abattoir hygiene on the distribution of STEC O157 concentration on carcasses is an important information gap; further empirical research is required to determine whether reduced prevalence of contaminated carcasses is
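    The variance-based Saltelli estimators behind such a GSA can be sketched generically. This is not the beef harvest chain model; the toy additive function, sample size, and estimator choices below are assumptions for illustration (Saltelli 2010 first-order estimator, Jansen total-effect estimator).

```python
import numpy as np

def sobol_indices(f, n_params, n=65536, seed=0):
    """Saltelli-style Monte Carlo estimators of first-order (S) and
    total-effect (ST) Sobol' indices on the unit hypercube."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_params))
    B = rng.random((n, n_params))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    ST = np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]                            # substitute column i of B into A
        fAB = f(AB)
        S[i] = np.mean(fB * (fAB - fA)) / var         # Saltelli (2010) first-order
        ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / var  # Jansen total-effect
    return S, ST

# Toy additive model y = 4*x0 + x1: analytically S0 = 16/17, S1 = 1/17,
# and ST equals S because there are no interactions.
f = lambda X: 4 * X[:, 0] + X[:, 1]
S, ST = sobol_indices(f, 2)
```

    A gap between ST and S for some input is the interaction signature the abstract describes: the input matters more in combination than alone.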

  18. Sensitivity of alpine watersheds to global change

    NASA Astrophysics Data System (ADS)

    Zierl, B.; Bugmann, H.

    2003-04-01

    Mountains provide society with a wide range of goods and services, so-called mountain ecosystem services. Besides many others, these services include the most precious element for life on earth: fresh water. Global change imposes significant environmental pressure on mountain watersheds. Climate change is predicted to modify water availability as well as shift its seasonality. In fact, the continued capacity of mountain regions to provide fresh water to society is threatened by the impact of environmental and social changes. We use RHESSys (Regional HydroEcological Simulation System) to analyse the impact of climate as well as land use change (e.g. afforestation or deforestation) on hydrological processes in mountain catchments using sophisticated climate and land use scenarios. RHESSys combines distributed flow modelling based on TOPMODEL with an ecophysiological canopy model based on BIOME-BGC and a climate interpolation scheme based on MTCLIM. It is a spatially distributed, daily time step model designed to solve the coupled cycles of water, carbon, and nitrogen in mountain catchments. The model is applied to various mountain catchments in the alpine area. Dynamic hydrological and ecological properties such as river discharge, seasonality of discharge, peak flows, snow cover processes, soil moisture, and the feedback of a changing biosphere on hydrology are simulated under current as well as under changed environmental conditions. Results of these studies will be presented and discussed. This project is part of an overarching EU project called ATEAM (acronym for Advanced Terrestrial Ecosystem Analysis and Modelling) assessing the vulnerability of European ecosystem services.

  19. Sensitivity analysis in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1984-01-01

    Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.

  20. Global Sensitivity Analysis of a Mathematical Model of Acute Inflammation Identifies Nonlinear Dependence of Cumulative Tissue Damage on Host Interleukin-6 Responses

    PubMed Central

    Mathew, Shibin; Bartels, John; Banerjee, Ipsita; Vodovotz, Yoram

    2014-01-01

    The precise inflammatory role of the cytokine interleukin (IL)-6 and its utility as a biomarker or therapeutic target have been the source of much debate, presumably due to the complex pro- and anti-inflammatory effects of this cytokine. We previously developed a nonlinear ordinary differential equation (ODE) model to explain the dynamics of endotoxin (lipopolysaccharide; LPS)-induced acute inflammation and associated whole-animal damage/dysfunction (a proxy for the health of the organism), along with the inflammatory mediators tumor necrosis factor (TNF)-α, IL-6, IL-10, and nitric oxide (NO). The model was partially calibrated using data from endotoxemic C57Bl/6 mice. Herein, we investigated the sensitivity of the area under the damage curve (AUCD) to the 51 rate parameters of the ODE model for different levels of simulated LPS challenges using a global sensitivity approach called Random Sampling High Dimensional Model Representation (RS-HDMR). We explored sufficient parametric Monte Carlo samples to generate the variance-based Sobol' global sensitivity indices, and found that inflammatory damage was highly sensitive to the parameters affecting the activity of IL-6 during the different stages of acute inflammation. The AUCIL6 showed a bimodal distribution, with the lower peak representing healthy response and the higher peak representing sustained inflammation. Damage was minimal at low AUCIL6, giving rise to a healthy response. In contrast, intermediate levels of AUCIL6 resulted in high damage, and this was due to the insufficiency of damage recovery driven by anti-inflammatory responses and the activation of positive feedback sustained by IL-6. At high AUCIL6, damage recovery was interestingly restored in some population of simulated animals due to the NO-mediated anti-inflammatory responses. These observations suggest that the host's health status during acute inflammation depends in a nonlinear fashion on the magnitude of the inflammatory stimulus, on the

  1. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps compared with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
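    The core idea, augmenting the governing equations with sensitivity equations that are integrated alongside the state, can be illustrated on a scalar decay problem (a sketch under simple assumptions; the paper's benchmark problems are not reproduced here).

```python
import math

def forward_sensitivity_decay(k=0.5, y0=2.0, t_end=1.0, n_steps=1000):
    """Forward sensitivity for dy/dt = -k*y: integrate the augmented system
    with s = dy/dk obeying ds/dt = -k*s - y, s(0) = 0, using classical RK4."""
    h = t_end / n_steps

    def rhs(y, s):
        return -k * y, -k * s - y

    y, s = y0, 0.0
    for _ in range(n_steps):
        k1y, k1s = rhs(y, s)
        k2y, k2s = rhs(y + 0.5 * h * k1y, s + 0.5 * h * k1s)
        k3y, k3s = rhs(y + 0.5 * h * k2y, s + 0.5 * h * k2s)
        k4y, k4s = rhs(y + h * k3y, s + h * k3s)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        s += h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)
    return y, s

y, s = forward_sensitivity_decay()
# Analytic check: y(t) = y0*exp(-k*t), so dy/dk = -t*y0*exp(-k*t).
y_exact = 2.0 * math.exp(-0.5)
```

    Treating the step size h itself as an additional sensitivity parameter, as the paper proposes, would add one more equation of the same form.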

  2. Sensitivity of global terrestrial ecosystems to climate variability.

    PubMed

    Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J

    2016-03-10

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being. PMID:26886790

  3. Sensitivity of global terrestrial ecosystems to climate variability

    NASA Astrophysics Data System (ADS)

    Seddon, Alistair W. R.; Macias-Fauria, Marc; Long, Peter R.; Benz, David; Willis, Kathy J.

    2016-03-01

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems—be they natural or with a strong anthropogenic signature—to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.
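    A crude stand-in for the memory-effect component of such an autoregressive analysis is the lag-1 autocorrelation of an anomaly series. The synthetic series below are hypothetical, not MODIS data.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a (mean-removed) anomaly series, a crude
    proxy for the memory-effect term of an AR(1) model."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

rng = np.random.default_rng(1)
white = rng.standard_normal(5000)      # memoryless series
ar1 = np.zeros(5000)                   # strong memory: AR(1) with phi = 0.8
for t in range(1, 5000):
    ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()
```

    In the vegetation-sensitivity setting, a high value of this statistic flags a region whose response to forcing is slowed by memory effects.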

  4. Transcriptomic Analysis of Chloroquine-Sensitive and Chloroquine-Resistant Strains of Plasmodium falciparum: Toward Malaria Diagnostics and Therapeutics for Global Health.

    PubMed

    Antony, Hiasindh Ashmi; Pathak, Vrushali; Parija, Subhash Chandra; Ghosh, Kanjaksha; Bhattacherjee, Amrita

    2016-07-01

    Increasing drug resistance in Plasmodium falciparum is an important global health burden because it reverses the malaria control achieved so far. Hence, understanding the molecular mechanisms of drug resistance is the epicenter of the development agenda for novel diagnostic and therapeutic (drug/vaccine) targets for malaria. In this study, we report global comparative transcriptome profiling (RNA-Seq) to characterize differences in the transcriptome at the 48-h intraerythrocytic stage between chloroquine-sensitive and chloroquine-resistant P. falciparum strains (3D7 and Dd2). The two strains have distant geographical origins: the Netherlands and Indochina, respectively. The strains were cultured in vitro and harvested at the 48-h intraerythrocytic stage at 5% parasitemia. Whole-transcriptome sequencing was performed on the Illumina HiSeq 2500 platform with paired-end reads. The reads were aligned to the reference P. falciparum genome. The alignment percentages for the 3D7, Dd2, and Dd2 w/CQ strains were 85.40%, 89.13%, and 84%, respectively. Nearly 40% of the transcripts had known gene function, whereas the remaining genes (about 60%) had unknown function. The genes involved in immune evasion showed a significant difference between the strains. Differential gene expression between the sensitive and resistant strains was measured using the cuffdiff program with a p-value cutoff of ≤0.05. Collectively, this study identified differentially expressed genes between the 3D7 and Dd2 strains, where we found 89 genes to be upregulated and 227 to be downregulated. In contrast, for the 3D7 and Dd2 w/CQ strains, 45 genes were upregulated and 409 were downregulated. These differentially regulated genes code, by and large, for surface antigens involved in invasion, pathogenesis, and host-parasite interactions, among others. The exhibition of transcriptional differences between these strains of P. falciparum contributes to our

  5. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  6. Global analysis of ligand sensitivity of estrogen inducible and suppressible genes in MCF7/BUS breast cancer cells by DNA microarray

    PubMed Central

    Coser, Kathryn R.; Chesnes, Jessica; Hur, Jingyung; Ray, Sandip; Isselbacher, Kurt J.; Shioda, Toshi

    2003-01-01

    To obtain comprehensive information on 17β-estradiol (E2) sensitivity of genes that are inducible or suppressible by this hormone, we designed a method that determines ligand sensitivities of large numbers of genes by using DNA microarray and a set of simple Perl computer scripts implementing the standard metric statistics. We used it to characterize effects of low (0–100 pM) concentrations of E2 on the transcriptome profile of MCF7/BUS human breast cancer cells, whose E2 dose-dependent growth curve saturated with 100 pM E2. Evaluation of changes in mRNA expression for all genes covered by the DNA microarray indicated that, at a very low concentration (10 pM), E2 suppressed ≈3–5 times larger numbers of genes than it induced, whereas at higher concentrations (30–100 pM) it induced ≈1.5–2 times more genes than it suppressed. Using clearly defined statistical criteria, E2-inducible genes were categorized into several classes based on their E2 sensitivities. This approach of hormone sensitivity analysis revealed that expression of two previously reported E2-inducible autocrine growth factors, transforming growth factor α and stromal cell-derived factor 1, was not affected by 100 pM and lower concentrations of E2 but strongly enhanced by 10 nM E2, which was far higher than the concentration that saturated the E2 dose-dependent growth curve of MCF7/BUS cells. These observations suggested that biological actions of E2 are derived from expression of multiple genes whose E2 sensitivities differ significantly and, hence, depend on the E2 concentration, especially when it is lower than the saturating level, emphasizing the importance of characterizing the ligand dose-dependent aspects of E2 actions. PMID:14610279

  7. Global analysis of ligand sensitivity of estrogen inducible and suppressible genes in MCF7/BUS breast cancer cells by DNA microarray.

    PubMed

    Coser, Kathryn R; Chesnes, Jessica; Hur, Jingyung; Ray, Sandip; Isselbacher, Kurt J; Shioda, Toshi

    2003-11-25

    To obtain comprehensive information on 17beta-estradiol (E2) sensitivity of genes that are inducible or suppressible by this hormone, we designed a method that determines ligand sensitivities of large numbers of genes by using DNA microarray and a set of simple Perl computer scripts implementing the standard metric statistics. We used it to characterize effects of low (0-100 pM) concentrations of E2 on the transcriptome profile of MCF7/BUS human breast cancer cells, whose E2 dose-dependent growth curve saturated with 100 pM E2. Evaluation of changes in mRNA expression for all genes covered by the DNA microarray indicated that, at a very low concentration (10 pM), E2 suppressed approximately 3-5 times larger numbers of genes than it induced, whereas at higher concentrations (30-100 pM) it induced approximately 1.5-2 times more genes than it suppressed. Using clearly defined statistical criteria, E2-inducible genes were categorized into several classes based on their E2 sensitivities. This approach of hormone sensitivity analysis revealed that expression of two previously reported E2-inducible autocrine growth factors, transforming growth factor alpha and stromal cell-derived factor 1, was not affected by 100 pM and lower concentrations of E2 but strongly enhanced by 10 nM E2, which was far higher than the concentration that saturated the E2 dose-dependent growth curve of MCF7/BUS cells. These observations suggested that biological actions of E2 are derived from expression of multiple genes whose E2 sensitivities differ significantly and, hence, depend on the E2 concentration, especially when it is lower than the saturating level, emphasizing the importance of characterizing the ligand dose-dependent aspects of E2 actions. PMID:14610279

  8. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps compared with other physical parameters of interest, the simulation is allowed

  9. Global genetic analysis.

    PubMed

    Elahi, Elahe; Kumm, Jochen; Ronaghi, Mostafa

    2004-01-31

    The introduction of molecular markers in genetic analysis has revolutionized medicine. These molecular markers are genetic variations associated with a predisposition to common diseases and with individual variations in drug responses. Identification and genotyping of a vast number of genetic polymorphisms in large populations are increasingly important for disease gene identification, pharmacogenetics, and population-based studies. Among the variations being analyzed, single nucleotide polymorphisms seem to be the most useful in large-scale genetic analysis. This review discusses approaches for genetic analysis, the use of different markers, and emerging technologies for large-scale genetic analysis, where millions of genotyping reactions need to be performed. PMID:14761299

  10. Comparative Sensitivity Analysis of Muscle Activation Dynamics.

    PubMed

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  11. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  12. Involute composite design evaluation using global design sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Hart, J. K.; Stanton, E. L.

    1989-01-01

    An optimization capability for involute structures has been developed. Its key feature is the use of global material geometry variables which are so chosen that all combinations of design variables within a set of lower and upper bounds correspond to manufacturable designs. A further advantage of global variables is that their number does not increase with increasing mesh density. The accuracy of the sensitivity derivatives has been verified both through finite difference tests and through the successful use of the derivatives by an optimizer. The state of the art in composite design today is still marked by point design algorithms linked together using ad hoc methods not directly related to a manufacturing procedure. The global design sensitivity approach presented here for involutes can be applied to filament wound shells and other composite constructions using material form features peculiar to each construction. The present involute optimization technology is being applied to the Space Shuttle SRM nozzle boot ring redesigns by PDA Engineering.
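    The finite-difference verification of sensitivity derivatives mentioned above is a standard check that applies beyond involute structures. A generic sketch with a hypothetical two-variable response function and its analytic gradient:

```python
import numpy as np

def fd_check(f, grad, x, h=1e-6):
    """Verify analytic sensitivity derivatives against central differences;
    returns the maximum absolute discrepancy over all components."""
    x = np.asarray(x, dtype=float)
    g_fd = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g_fd[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return float(np.max(np.abs(g_fd - grad(x))))

# Hypothetical response f(x) = x0^2 + 3*x0*x1 with analytic gradient.
f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
grad = lambda x: np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])
err = fd_check(f, grad, [1.5, -2.0])
```

    A small discrepancy gives the same confidence as the finite difference tests cited in the abstract before handing derivatives to an optimizer.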

  13. Connecting Local and Global Sensitivities in a Mathematical Model for Wound Healing.

    PubMed

    Krishna, Nitin A; Pennington, Hannah M; Coppola, Canaan D; Eisenberg, Marisa C; Schugart, Richard C

    2015-12-01

    The process of wound healing is governed by complex interactions between proteins and the extracellular matrix, involving a range of signaling pathways. This study aimed to formulate, quantify, and analyze a mathematical model describing interactions among matrix metalloproteinases (MMP-1), their inhibitors (TIMP-1), and extracellular matrix in the healing of a diabetic foot ulcer. De-identified patient data for modeling were taken from Muller et al. (Diabet Med 25(4):419-426, 2008), a study that collected average physiological data for two patient subgroups: "good healers" and "poor healers," where classification was based on rate of ulcer healing. Model parameters for the two patient subgroups were estimated using least squares. The model and parameter values were analyzed by conducting a steady-state analysis and both global and local sensitivity analyses. The global sensitivity analysis was performed using Latin hypercube sampling and partial rank correlation analysis, while local analysis was conducted through a classical sensitivity analysis followed by an SVD-QR subset selection. We developed a "local-to-global" analysis to compare the results of the sensitivity analyses. Our results show that the sensitivities of certain parameters are highly dependent on the size of the parameter space, suggesting that identifying physiological bounds may be critical in defining the sensitivities. PMID:26597096
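    The Latin hypercube sampling plus partial rank correlation workflow can be sketched generically. The three-parameter toy model below is hypothetical, not the wound-healing system; `lhs` and `prcc` are illustrative names.

```python
import numpy as np

def lhs(n, d, rng):
    """Latin hypercube sample on (0, 1): one point per stratum in each
    dimension, strata independently shuffled per column."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def prcc(X, y):
    """Partial rank correlation coefficient of each input column with y:
    rank-transform, regress out the other inputs, correlate the residuals."""
    def ranks(v):
        r = np.empty(v.size)
        r[np.argsort(v)] = np.arange(v.size, dtype=float)
        return r

    Xr = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    yr = ranks(y)
    out = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Z = np.column_stack([np.ones(yr.size), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - Z @ np.linalg.lstsq(Z, Xr[:, j], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[j] = rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))
    return out

rng = np.random.default_rng(0)
X = lhs(2000, 3, rng)
y = 5.0 * X[:, 0] + X[:, 1] + 0.01 * rng.standard_normal(2000)  # x2 is inert
rho = prcc(X, y)
```

    PRCC values near ±1 flag monotonically influential parameters; values near zero flag parameters whose range, within the sampled bounds, barely matters.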

  14. An analysis of sensitivity tests

    SciTech Connect

    Neyer, B.T.

    1992-03-06

A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
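The likelihood-ratio confidence-region idea can be illustrated for a go/no-go sensitivity test, assuming a normal (probit) response model; the stimulus levels and responses below are made up, and the grid search is a simplification of whatever numerics the paper actually uses.

```python
# Likelihood-ratio confidence region for (mu, sigma) in a sensitivity test.
import numpy as np
from scipy.stats import norm, chi2

x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])  # stimulus levels (illustrative)
y = np.array([0,   0,   1,   0,   1,   1  ])  # 1 = response, 0 = no response

def loglik(mu, sigma):
    p = norm.cdf((x - mu) / sigma)          # response probability at each level
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Grid search for the maximum likelihood and the joint confidence region.
mus = np.linspace(0.5, 4.0, 200)
sigmas = np.linspace(0.05, 3.0, 200)
LL = np.array([[loglik(m, s) for s in sigmas] for m in mus])
ll_max = LL.max()

# Points inside the 90% region satisfy 2*(ll_max - ll) <= chi2(df=2) quantile.
inside = 2 * (ll_max - LL) <= chi2.ppf(0.90, df=2)
print("region covers", inside.mean() * 100, "% of the grid")
```

The region is whatever shape the likelihood surface dictates, which is how the method avoids the underestimation associated with asymptotic (elliptical) approximations.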

  15. 21st century runoff sensitivities of major global river basins

    NASA Astrophysics Data System (ADS)

    Tang, Qiuhong; Lettenmaier, Dennis P.

    2012-03-01

    River runoff is a key index of renewable water resources which affect almost all human and natural systems. Any substantial change in runoff will therefore have serious social, environmental, and ecological consequences. We estimate the runoff response to global mean temperature change implied by the climate change experiments generated for the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). In contrast to previous studies, we estimate the runoff sensitivity using global mean temperature change as an index of anthropogenic climate changes in temperature and precipitation, with the rationale that this removes the dependence on emissions scenarios. Our results show that the runoff sensitivity implied by the IPCC experiments is relatively stable across emissions scenarios and global mean temperature increments, but varies substantially across models with the exception of the high-latitudes and currently arid or semi-arid areas. The runoff sensitivities are slightly higher at 0.5°C warming than for larger amounts of warming. The estimated ratio of runoff change to (local) precipitation change (runoff elasticity) ranges from about one to three, and the runoff temperature sensitivity (change in runoff per degree C of local temperature increase) ranges from decreases of about 2 to 6% over most basins in North America and the middle and high latitudes of Eurasia.
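The two sensitivity measures used in this record reduce to simple ratios; the numbers below are illustrative, not values from the study.

```python
# Runoff elasticity and temperature sensitivity as defined in the abstract.
def runoff_elasticity(dQ_frac, dP_frac):
    """Ratio of fractional runoff change to fractional precipitation change."""
    return dQ_frac / dP_frac

def temperature_sensitivity(dQ_frac, dT):
    """Percent runoff change per degree C of local temperature increase."""
    return 100.0 * dQ_frac / dT

# e.g. a 10% precipitation increase that raises runoff by 20%:
print(runoff_elasticity(0.20, 0.10))        # within the 1-3 range cited
# e.g. a 6% runoff decrease over 1.5 C of warming:
print(temperature_sensitivity(-0.06, 1.5))  # % per degree C
```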

  16. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  17. Diagnostic Analysis of Middle Atmosphere Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Cai, M.; Swartz, W. H.; Coy, L.; Yee, J.; Talaat, E. R.

    2013-12-01

    Both the middle atmosphere climate sensitivity associated with the cooling trend and its uncertainty due to a complex system of drivers increase with altitude. Furthermore, the combined effect of middle atmosphere cooling due to long-lived greenhouse gases and ozone is also associated with natural climate variations due to solar activity. To understand and predict climate change from a global perspective, we use the recently developed climate feedback-response analysis method (CFRAM) to identify and isolate the signals from the external forcing and from different feedback processes in the middle atmosphere climate system. By use of the JHU/APL middle atmosphere radiation algorithm, the CFRAM is applied to the model output fields of the high-altitude GEOS-5 climate model in the middle atmosphere to delineate the individual contributions of radiative forcing to middle atmosphere climate sensitivity.

  18. Sensitivity and Uncertainty Analysis Shell

    Energy Science and Technology Software Center (ESTSC)

    1999-04-20

SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using the Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
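The sample-then-run-then-post-process pattern SUNS describes can be sketched in a few lines. The "process model" here is a made-up stand-in for the user-supplied code, and the distributions are illustrative.

```python
# Toy illustration of the Monte Carlo loose-coupling pattern:
# sampler -> external process model -> post-processing of input/output pairs.
import numpy as np

rng = np.random.default_rng(42)

def process_model(load, stiffness):
    # Stand-in for the user's code: deflection of a linear spring.
    return load / stiffness

# 1. Generate the statistical sample for the uncertain inputs.
n = 1000
load = rng.normal(100.0, 10.0, n)       # assumed mean 100, sd 10
stiffness = rng.uniform(8.0, 12.0, n)   # assumed range

# 2. Run the process model over the sample.
deflection = process_model(load, stiffness)

# 3. Post-process: summary statistics from the sampled inputs and outputs.
print("mean deflection:", deflection.mean())
print("95th percentile:", np.percentile(deflection, 95))
```

In the real tool the three steps live in separate programs coupled through files, but the statistical content is the same.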

  19. Sensitivity of regional climate to global temperature and forcing

    NASA Astrophysics Data System (ADS)

    Tebaldi, Claudia; O'Neill, Brian; Lamarque, Jean-François

    2015-07-01

The sensitivity of regional climate to global average radiative forcing and temperature change is important for setting global climate policy targets and designing scenarios. Setting effective policy targets requires an understanding of the consequences of exceeding them, even by small amounts, and the effective design of sets of scenarios requires knowledge of how different emissions, concentrations, or forcings need to be in order to produce substantial differences in climate outcomes. Using an extensive database of climate model simulations, we quantify how differences in global average quantities relate to differences in both the spatial extent and magnitude of climate outcomes at regional (250-1250 km) scales. We show that differences of about 0.3 °C in global average temperature are required to generate statistically significant changes in regional annual average temperature over more than half of the Earth’s land surface. A global difference of 0.8 °C is necessary to produce regional warming over half the land surface that is not only significant but reaches at least 1 °C. As much as 2.5 to 3 °C is required for a statistically significant change in regional annual average precipitation that is equally pervasive. Global average temperature change provides a better metric than radiative forcing for indicating differences in regional climate outcomes due to the path dependency of the effects of radiative forcing. For example, a difference in radiative forcing of 0.5 W m-2 can produce statistically significant differences in regional temperature over an area that ranges between 30% and 85% of the land surface, depending on the forcing pathway.

  20. Global thermohaline circulation. Part 1: Sensitivity to atmospheric moisture transport

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A global ocean general circulation model of idealized geometry, combined with an atmospheric model based on observed transports of heat, momentum, and moisture, is used to explore the sensitivity of the global conveyor belt circulation to the surface freshwater fluxes, in particular the effects of meridional atmospheric moisture transports. The numerical results indicate that the equilibrium strength of the North Atlantic Deep Water (NADW) formation increases as the global freshwater transports increase. However, the global deep water formation--that is, the sum of the NADW and the Southern Ocean Deep Water formation rates--is relatively insensitive to changes of the freshwater flux. Perturbations to the meridional moisture transports of each hemisphere identify equatorially asymmetric effects of the freshwater fluxes. The results are consistent with box model results that the equilibrium NADW formation is primarily controlled by the magnitude of the Southern Hemisphere freshwater flux. However, the results show that the Northern Hemisphere freshwater flux has a strong impact on the transient behavior of the North Atlantic overturning. Increasing this flux leads to a collapse of the conveyor belt circulation, but the collapse is delayed if the Southern Hemisphere flux also increases. The perturbation experiments also illustrate that the rapidity of collapse is affected by random fluctuations in the wind stress field.

  1. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  2. Use of global sensitivity analysis in quantitative microbial risk assessment: application to the evaluation of a biological time temperature integrator as a quality and safety indicator for cold smoked salmon.

    PubMed

    Ellouze, M; Gauchi, J-P; Augustin, J-C

    2011-06-01

The aim of this study was to apply a global sensitivity analysis (SA) method in model simplification and to evaluate (eO)®, a biological Time Temperature Integrator (TTI), as a quality and safety indicator for cold smoked salmon (CSS). Models were thus developed to predict the evolutions of Listeria monocytogenes and the indigenous food flora in CSS and to predict the TTI endpoint. A global SA was then applied to the three models to identify the less important factors and simplify the models accordingly. Results showed that the subset of the most important factors of the three models was mainly composed of the durations and temperatures of two chill chain links, out of the control of the manufacturers: the domestic refrigerator and the retail/cabinet links. Then, the simplified versions of the three models were run with 10^4 time-temperature profiles representing the variability associated with the microbial behavior, the TTI evolution and the French chill chain characteristics. The results were used to assess the distributions of the microbial contaminations obtained at the TTI endpoint and at the end of the simulated profiles and showed that, in the case of poor storage conditions, use of the TTI could reduce the number of unacceptable foods by 50%. PMID:21511136

  3. Stiff DAE integrator with sensitivity analysis capabilities

    Energy Science and Technology Software Center (ESTSC)

    2007-11-26

IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  4. Point Source Location Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cox, J. Allen

    1986-11-01

    This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
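The standard centroid technique mentioned above amounts to an intensity-weighted mean over detector coordinates. The sketch below uses a Gaussian blur spot on a square-detector grid; the spot position, width, and grid size are illustrative.

```python
# Standard centroid location of a point source from detector-array samples.
import numpy as np

# Simulate a Gaussian blur spot sampled on a 16x16 square detector grid.
true_x, true_y, sigma = 7.3, 6.8, 1.5
ix, iy = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
signal = np.exp(-((ix - true_x) ** 2 + (iy - true_y) ** 2) / (2 * sigma**2))

# Standard centroid: intensity-weighted mean of the detector coordinates.
total = signal.sum()
est_x = (ix * signal).sum() / total
est_y = (iy * signal).sum() / total
print(est_x, est_y)  # close to (7.3, 6.8) for a well-sampled, unclipped spot
```

Location accuracy in the paper's sense then comes from repeating this with noise, different blur spots, and different detector shapes.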

  5. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094

  6. Sensitivity of Local Temperature CDFs to Global Climate Change

    NASA Astrophysics Data System (ADS)

    Stainforth, D.; Chapman, S. C.; Watkins, N. W.

    2011-12-01

The sensitivity of climate to increasing atmospheric greenhouse gases at the global scale has been much studied [Knutti and Hegerl 2008, and references therein]. Scientific information to support climate change adaptation activities, however, is often sought at regional or local scales, the scales on which most adaptation decisions are made. Information on these scales is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005] and has questionable reliability [Stainforth et al., 2007]. Rather than using data derived or obtained from models we focus on observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. Such an approach has many advantages: it avoids issues relating to model imperfections [Stainforth et al. 2007], it can be focused on decision relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Taking a timeseries of local daily temperatures for various locations across the United Kingdom we extract the changing cumulative distribution functions over time. We present a simple mathematical deconstruction of how two different observations from two different time periods can be attributed to some combination of natural variability and/or the consequences of climate change. Using this deconstruction we analyse the changing shape of the distributions and thus the sensitivity of different quantiles of the distribution. These sensitivities are found to be both regionally consistent and geographically varying across the United Kingdom, as one would expect given the different influences on local climate between, say, Western Scotland and South East England. We nevertheless find a common pattern of increased sensitivity in the 60th to 80th percentiles; above the mean but below the greatest extremes.
The method has the potential to be applied to many other variables in addition to temperature and to

  7. Sensitivity of global model prediction to initial state uncertainty

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo

The sensitivity of global and North American forecasts to uncertainties in the initial conditions is studied. The Utah Global Model is initialized with reanalysis data sets obtained from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF). The differences between these analyses provide an estimate of initial uncertainty. The influence of certain scales of the initial uncertainty is tested in experiments with initial data changed from NCEP to ECMWF reanalysis in a selected spectral band. Experiments are also done to determine the benefits of targeting local regions for forecast errors over North America. In these tests, NCEP initial data are replaced by ECMWF data in the considered region. The accuracy of predictions with initial data from either reanalysis only differs over the mid-latitudes of the Southern Hemisphere, where ECMWF-initialized forecasts have somewhat greater skill. Results from the spectral experiments indicate that most of this benefit is explained by initial differences of the longwave components (wavenumbers 0-15). Approximately 67% of the 120-h global forecast difference produced by changing initial data from ECMWF to NCEP reanalyses is due to initial changes only in wavenumbers 0-15, and more than 85% of this difference is produced by initial changes in wavenumbers 0-20. The results suggest that large-scale errors of the initial state may play a more prominent role than suggested in some singular vector analyses, and favor global observational coverage to resolve the long waves. Results from the regional targeting experiments indicate that for forecast errors over North America, a systematic benefit comes only when the "targeted" region includes most of the north Pacific, pointing again at large-scale errors as being prominent, even for midrange predictions over a local area.

  8. Identification of the significant factors in food safety using global sensitivity analysis and the accept-and-reject algorithm: application to the cold chain of ham.

    PubMed

    Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee

    2014-06-16

Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying due to season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model predicts product temperature evolution from the thermostat setting and the ambient temperature, generating a realistic time-temperature history for the product. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables, such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. Then, a refined analysis was applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure. PMID:24786551
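The combined deterministic/stochastic idea can be illustrated in miniature: sample the variable chill-chain inputs (Monte Carlo), drive a simple deterministic growth model with each draw, and examine the resulting exposure distribution. All rates, ranges, and thresholds below are illustrative, and the square-root growth law stands in for whatever microbial model the study uses.

```python
# Toy Monte Carlo over chill-chain variability feeding a deterministic
# (Ratkowsky-type square-root) growth model.
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Stochastic inputs: fridge thermostat setting and storage time.
T_set = rng.normal(5.0, 1.5, n)        # domestic fridge temperature, deg C
t_store = rng.uniform(24.0, 240.0, n)  # residence time, h

# Deterministic growth model: mu = (b*(T - Tmin))^2, in ln CFU per hour.
T_min = -1.2                           # notional minimum growth temperature
b = 0.02                               # illustrative rate constant
mu = np.where(T_set > T_min, (b * (T_set - T_min)) ** 2, 0.0)

N0 = 2.0                               # initial load, log10 CFU/g
logN = N0 + mu * t_store / np.log(10)  # final load, log10 CFU/g

print("median final load:", np.median(logN))
print("fraction above 5 log10 CFU/g:", (logN > 5.0).mean())
```

Ranking input factors then follows by correlating each sampled input with the simulated exposure, as in the study's sensitivity analysis.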

  9. Sensitivity of global wildfire occurrences to various factors in the context of global change

    NASA Astrophysics Data System (ADS)

    Huang, Yaoxian; Wu, Shiliang; Kaplan, Jed O.

    2015-11-01

The occurrence of wildfires is very sensitive to fire meteorology, vegetation type and coverage. We investigate the potential impacts of global change (including changes in climate, land use/land cover, and population density) on wildfire frequencies over the period of 2000-2050. We account for the impacts associated with the changes in fire meteorology (such as temperature, precipitation, and relative humidity), vegetation density, as well as lightning and anthropogenic ignitions. Fire frequencies under the 2050 conditions are projected to increase by approximately 27% globally relative to the 2000 levels. Significant increases in fire occurrence are calculated over the Amazon area, Australia and Central Russia, while Southeast Africa shows a large decreasing trend due to significant increases in land use and population. Changes in fire meteorology driven by 2000-2050 climate change are found to increase the global annual total fires by around 19%. Modest increases (∼4%) in fire frequency in tropical regions are calculated in response to climate-driven changes in lightning activities, relative to the present-day levels. Changes in land cover by 2050 driven by climate change and increasing CO2 fertilization are expected to increase the global wildfire occurrences by 15% relative to the 2000 conditions, while the 2000-2050 anthropogenic land use changes show little effect on global wildfire frequency. The 2000-2050 changes in global population are projected to reduce the total wildfires by about 7%. In general, changes in future fire meteorology play the most important role in enhancing future global wildfires, followed by land cover, lightning activities and land use, while changes in population density exhibit the opposite effect during the period of 2000-2050.

  10. Global Soil Moisture Analysis at DWD

    NASA Astrophysics Data System (ADS)

    Lange, M.

    2012-04-01

Small errors in the daily forecast of precipitation, evaporation and runoff accumulate into uncertainties in soil water content and lead to systematic biases of temperature and humidity profiles in the boundary layer if no corrections are applied. A new soil moisture assimilation scheme has been developed for the global GME model and has run operationally since March 2011. Like many other variational schemes implemented at NWP centers (e.g., Canadian Met Service, DWD, ECMWF, Meteo France), the scheme is based on minimisation of screen-level forecast errors by adjusting the soil water content, implicitly correcting the partitioning of available energy into latent and sensible heat. The original method proposed by Mahfouf (1991) and described in Hess (2001) requires at least two additional model forecast runs to calculate the gradient of the cost function, i.e., the sensitivity dT2m/dwb, with T2m the 2m temperature and wb the soil water content of the respective top and bottom soil layers. To overcome this computationally costly approach, in the new scheme the sensitivity of screen-level temperature to soil moisture changes is parameterized with derivatives of analytical relations for transpiration from vegetation and bare soil evaporation, as motivated by Jacobs and De Bruin (1992). The comparison of both methods shows high correlation of the temperature sensitivity that justifies the approximation. The method will be described in detail and verification results will be presented to demonstrate the impact of soil moisture analysis in GME. Hess, R. 2001: Assimilation of screen-level observations by variational soil moisture analysis. Meteorol. Atmos. Phys. 77, 145-154. Jacobs, C.M.M. and H.A.R. De Bruin, 1992: The Sensitivity of Regional Transpiration to Land-Surface Characteristics: Significance of Feedback. J. Clim. 5, 683-698. Mahfouf, J-F. 1991. Analysis of soil moisture from near-surface parameters: A feasibility study. J. Appl. Meteorol. 30: 1534-1547.
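The contrast between the two routes to dT2m/dwb can be shown schematically: a finite difference needs extra forecast runs per perturbation, while an analytic derivative of a closed-form relation needs none. The "forecast model" below is a toy stand-in, not GME physics.

```python
# Finite-difference vs analytic sensitivity dT2m/dwb on a toy land model.
import numpy as np

def t2m_forecast(wb):
    # Toy response: wetter soil -> more evaporation -> cooler screen level.
    return 300.0 - 8.0 * np.sqrt(wb)

# Finite difference: each perturbation costs an additional model run.
wb0, dw = 0.25, 0.01
sens_fd = (t2m_forecast(wb0 + dw) - t2m_forecast(wb0)) / dw

# Analytic derivative of the same closed-form relation: no extra runs.
sens_an = -8.0 * 0.5 / np.sqrt(wb0)

print(sens_fd, sens_an)  # the two estimates agree closely
```

The operational scheme's gain is exactly this: replacing the perturbed forecast runs with derivatives of analytic transpiration/evaporation relations.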

  11. Global analysis of intraplate basins

    NASA Astrophysics Data System (ADS)

    Heine, C.; Mueller, D. R.; Dyksterhuis, S.

    2005-12-01

Broad intraplate sedimentary basins often show a mismatch of lithospheric extension factors compared to those inferred from sediment thickness and subsidence modelling, not conforming to the current understanding of rift basin evolution. Mostly, these basins are underlain by a very heterogeneous and structurally complex basement which has been formed as a product of Phanerozoic continent-continent or terrane/arc-continent collision and is usually referred to as being accretionary. Most likely, the basin-underlying substrate is one of the key factors controlling the style of extension. In order to investigate and model the geodynamic framework and mechanics controlling formation and evolution of these long-term depositional regions, we have been analysing a global set of more than 200 basins using various remotely sensed geophysical data sets and relational geospatial databases. We have compared elevation, crustal and sediment thickness, heatflow, crustal structure, basin ages and geometries with computed differential beta, anomalous tectonic subsidence, and differential extension factor grids for these basins. The crust/mantle interactions in the basin regions are investigated using plate tectonic reconstructions in a mantle convection framework for the last 160 Ma. Characteristic parameters and patterns derived from this global analysis are then used to generate a classification scheme, to estimate the misfit between models derived from either crustal thinning or sediment thickness, and as input for extension models using particle-in-cell finite element codes. Basins with high differential extension values include the ``classical'' intraplate-basins, like the Michigan Basin in North America, the Zaire Basin in Africa, basins of the Arabian Peninsula, and the West Siberian Basin. According to our global analysis so far, these basins show that, with increasing basin age, the amount of crustal extension vs. the extension values estimated from sediment thickness

  12. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
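Two of the techniques surveyed in this record, one-at-a-time (OAT) perturbation and partial-derivative (local) sensitivity, can be sketched on a toy model; the model and baseline values are illustrative.

```python
# OAT perturbation and finite-difference local sensitivity on a toy model.
import numpy as np

def model(p):
    a, b, c = p
    return a * b**2 + np.sin(c)

base = np.array([1.0, 2.0, 0.5])
y0 = model(base)

# One-at-a-time: perturb each parameter by 10% and record the output change.
oat = []
for i in range(3):
    p = base.copy()
    p[i] *= 1.10
    oat.append(model(p) - y0)

# Local sensitivity: central finite-difference estimate of dY/dp_i.
h = 1e-6
grad = []
for i in range(3):
    hi, lo = base.copy(), base.copy()
    hi[i] += h
    lo[i] -= h
    grad.append((model(hi) - model(lo)) / (2 * h))

print("OAT deltas:", oat)
print("gradient:  ", grad)
```

The gradient recovers the analytic partials (b², 2ab, cos c) at the baseline, while the OAT deltas additionally depend on the perturbation size, which is exactly the distinction the review draws.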

  13. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Lee, L. A.; Carslaw, K. S.; Pringle, K. J.; Mann, G. W.; Spracklen, D. V.

    2011-12-01

    Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. 
Gaussian process emulation is shown to
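The emulator workflow this record describes can be sketched compactly: fit a Gaussian process to a small Latin hypercube design of "expensive" model runs, then use cheap emulator predictions to estimate each parameter's main-effect share of the output variance. The two-parameter model below is a stand-in for the aerosol model (parameter 0 plays the dominant role, like the sulphur emissions in the abstract), and the GP uses a fixed length-scale for brevity rather than fitted hyperparameters.

```python
# Gaussian process emulation + main-effect variance shares on a toy model.
import numpy as np
from scipy.stats import qmc

def expensive_model(x):
    # Stand-in: output dominated by the first parameter.
    return 4.0 * x[:, 0] + np.sin(2 * np.pi * x[:, 1])

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Train the emulator on a small Latin hypercube design of model runs.
X = qmc.LatinHypercube(d=2, seed=1).random(n=40)
y = expensive_model(X)
y_mean = y.mean()
alpha = np.linalg.solve(rbf(X, X) + 1e-8 * np.eye(len(X)), y - y_mean)

def emulate(Xs):
    return rbf(Xs, X) @ alpha + y_mean

# Main-effect share: variance of the emulator's conditional mean when one
# parameter is fixed and the other is averaged out over the input space.
big = qmc.LatinHypercube(d=2, seed=2).random(n=2000)
var_total = emulate(big).var()
shares = []
for j in range(2):
    cond_means = []
    for v in np.linspace(0.01, 0.99, 40):
        pts = big.copy()
        pts[:, j] = v
        cond_means.append(emulate(pts).mean())
    shares.append(np.var(cond_means) / var_total)
print(shares)  # parameter 0 dominates, roughly in a 70/30 split here
```

Interaction effects, which the abstract highlights for the remote region, show up as the gap between the sum of these main-effect shares and one.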

  14. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Lee, L. A.; Carslaw, K. S.; Pringle, K.; Mann, G. W.; Spracklen, D. V.

    2011-07-01

    Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. 
Gaussian process emulation is shown to be an efficient and useful technique for
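    The emulation-plus-variance-decomposition workflow described above can be sketched in a few lines. The two-input test function, design sizes, and RBF kernel below are illustrative stand-ins (not the paper's aerosol model), assuming scikit-learn and SciPy are available:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def model(X):
    # Stand-in for an expensive simulator (hypothetical test function)
    return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 1]

# Small Latin hypercube space-filling design, then a Gaussian process emulator
X_train = qmc.LatinHypercube(d=2, seed=0).random(60)
gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True)
gp.fit(X_train, model(X_train))

rng = np.random.default_rng(0)

def first_order_index(i, m=40, n=400):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by Monte Carlo on the cheap emulator."""
    cond_means = []
    for xi in np.linspace(0.0, 1.0, m):
        X = rng.random((n, 2))
        X[:, i] = xi                      # freeze input i, average over the rest
        cond_means.append(gp.predict(X).mean())
    total_var = np.var(gp.predict(rng.random((20000, 2))))
    return np.var(cond_means) / total_var

S = [first_order_index(i) for i in (0, 1)]
print(np.round(S, 2))   # input 0 should dominate for this test function
```

    The emulator replaces the tens of thousands of direct model evaluations a variance-based analysis would otherwise require with 60 training runs; the same construction extends to total-effect indices and interaction terms.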

  15. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.

  16. Sensitivity of flood events to global climate change

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Dimou, George

    1997-04-01

The sensitivity of Acheloos river flood events at the outfall of the mountainous Mesochora catchment in Central Greece was analysed under various scenarios of global climate change. The climate change pattern was simulated through a set of hypothetical and monthly GISS (Goddard Institute for Space Studies) scenarios of temperature increase coupled with precipitation changes. The daily outflow of the catchment, which is dominated by spring snowmelt runoff, was simulated by the coupling of snowmelt and soil moisture accounting models of the US National Weather Service River Forecast System. Two threshold levels were used to define a flood day—the double and triple long-term mean daily streamflow—and the flood parameters (occurrences, duration, magnitude, etc.) for these cases were determined. Despite the complicated response of flood events to temperature increase and threshold, both hypothetical and monthly GISS representations of climate change resulted in more and longer flood events for climates with increased precipitation. All climates yielded larger flood volumes and greater mean values of flood peaks with respect to precipitation increase. The lower threshold resulted in more and longer flood occurrences, as well as smaller flood volumes and peaks than those of the upper one. The combination of higher and more frequent flood events could lead to greater risks of inundation and possible damage to structures. Furthermore, the winter swelling of the streamflow could increase erosion of the river bed and banks and hence modify the river profile.

  17. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  18. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
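    The variogram analogy at the heart of VARS can be illustrated directly: for a toy two-factor response surface, the directional variogram gamma_i(h) = 0.5 E[(f(x + h e_i) - f(x))^2] is larger along the more influential factor. The function and lag below are invented for illustration:

```python
import numpy as np

def f(x):
    # Hypothetical response surface: factor 0 is strongly (and nonlinearly)
    # influential, factor 1 only weakly so
    return np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1]

rng = np.random.default_rng(0)

def directional_variogram(i, h, n=20000):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], sampled over the unit square."""
    x = rng.random((n, 2)) * (1.0 - h)   # keep x + h*e_i inside [0, 1]^2
    x2 = x.copy()
    x2[:, i] += h
    return 0.5 * np.mean((f(x2) - f(x)) ** 2)

g = [directional_variogram(i, 0.1) for i in (0, 1)]
print(np.round(g, 4))   # factor 0 shows a much larger variogram at this lag
```

    Sweeping the lag h from small to large recovers derivative-like (Morris) information at short range and variance-like (Sobol) information at long range, which is the sense in which the VARS framework contains both families of indices.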

  19. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  20. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2008-09-01

This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator: a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. In contrast to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a handful of runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the solution of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
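    As a minimal sketch of the "glass box" idea (a toy decay equation, not the report's system code): differentiating the model dy/dt = -k*y with respect to the parameter k yields a companion equation for the sensitivity s = dy/dk that is integrated alongside the state in the same run:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # uncertain model parameter (decay rate)

def rhs(t, z):
    y, s = z                 # state and its sensitivity s = dy/dk
    return [-k * y,          # original model equation
            -y - k * s]      # the model equation differentiated w.r.t. k

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
y_T, s_T = sol.y[:, -1]
print(y_T, s_T)  # analytic solution: y = exp(-k*t), s = -t*exp(-k*t)
```

    One extra equation per parameter replaces the many repeated runs a sampling-based ("black box") approach would need for the same gradient information.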

  1. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  2. Increased sensitivity to transient global ischemia in aging rat brain.

    PubMed

    Xu, Kui; Sun, Xiaoyan; Puchowicz, Michelle A; LaManna, Joseph C

    2007-01-01

Transient global brain ischemia induced by cardiac arrest and resuscitation (CAR) results in reperfusion injury associated with oxidative stress. Oxidative stress is known to produce delayed selective neuronal cell loss and impairment of brainstem function, leading to post-resuscitation mortality. Levels of 4-hydroxy-2-nonenal (HNE) modified protein adducts, a marker of oxidative stress, were found to be elevated after CAR in rat brain. In this study we investigated the effects of an antioxidant, alpha-phenyl-tert-butyl-nitrone (PBN), on the recovery following CAR in the aged rat brain. Male Fischer 344 rats (6-, 12- and 24-month old) underwent 7 minutes of cardiac arrest before resuscitation. Brainstem function was assessed by the hypoxic ventilatory response (HVR) and HNE adducts were measured by western blot analysis. Our data showed that in the 24-month old rats, the overall survival rate, hippocampal CA1 neuronal counts and HVR were significantly reduced compared to the younger rats. With PBN treatment, recovery was improved in the aged rat brain, which was consistent with reduced HNE adducts in brain following CAR. Our data suggest that aged rats are more vulnerable to oxidative stress insult and that treatment with PBN improves the outcome following reperfusion injury. The mechanism of action is most likely through the scavenging of reactive oxygen species, resulting in reduced lipid peroxidation. PMID:17727265

  3. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
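    The trade-off reported above (a global Chebyshev fit struggling with local variations that panel-based interpolation captures) can be reproduced on a one-dimensional pressure-like profile with a sharp feature; the profile, polynomial degree, and grid are invented for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical chordwise pressure profile with a sharp local feature
x = np.linspace(-1.0, 1.0, 200)
p = np.tanh(20.0 * x)

# Global representation: low-order Chebyshev series over the whole interval
coef = C.chebfit(x, p, deg=10)
global_err = np.abs(C.chebval(x, coef) - p).max()

# Local (panel-based) alternative: piecewise-linear interpolation between the
# same 200 stations, checked at the panel midpoints
xm = 0.5 * (x[:-1] + x[1:])
local_err = np.abs(np.interp(xm, x, p) - np.tanh(20.0 * xm)).max()

print(global_err, local_err)  # the local scheme resolves the sharp feature far better
```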

  4. Sensitivity of global river discharges under Holocene and future climate conditions

    NASA Astrophysics Data System (ADS)

    Aerts, J. C. J. H.; Renssen, H.; Ward, P. J.; de Moel, H.; Odada, E.; Bouwer, L. M.; Goosse, H.

    2006-10-01

    A comparative analysis of global river basins shows that some river discharges are more sensitive to future climate change for the coming century than to natural climate variability over the last 9000 years. In these basins (Ganges, Mekong, Volta, Congo, Amazon, Murray-Darling, Rhine, Oder, Yukon) future discharges increase by 6-61%. These changes are of similar magnitude to changes over the last 9000 years. Some rivers (Nile, Syr Darya) experienced strong reductions in discharge over the last 9000 years (17-56%), but show much smaller responses to future warming. The simulation results for the last 9000 years are validated with independent proxy data.

  5. Stochastic Simulations and Sensitivity Analysis of Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2008-08-01

For complex physical systems with a large number of random inputs, performing stochastic simulations over all of them is very expensive. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, providing information on which random inputs have the most influence on the system outputs and on the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global methods. The local approach, which relies on a partial derivative of output with respect to parameters, is used to measure the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range from their nominal values, the local sensitivity does not provide full information to the system operators. On the other hand, the global approach examines the sensitivity over the entire range of the parameter variations. The global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the Quasi-Monte Carlo method and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality and hence the cost of stochastic simulations.
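    A bare-bones version of the Morris (elementary-effects) screening mentioned above, using a made-up three-input test function in place of an MHD model:

```python
import numpy as np

def f(x):
    # Hypothetical model: x0 strong, x1 weak, x2 acts only through x0
    return 2.0 * x[0] + 0.1 * x[1] + x[0] * x[2]

def morris_mu_star(f, dim, r=50, delta=0.1, seed=1):
    """Mean absolute elementary effect (mu*) of each input from r OAT samples."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, dim))
    for t in range(r):
        x = rng.random(dim) * (1.0 - delta)  # keep x + delta inside [0, 1]
        for i in range(dim):                 # one-at-a-time perturbations
            x2 = x.copy()
            x2[i] += delta
            ee[t, i] = (f(x2) - f(x)) / delta
    return np.abs(ee).mean(axis=0)

mu = morris_mu_star(f, 3)
print(np.round(mu, 2))   # x0 ranks first; x2 beats x1 despite its zero main term
```

    Averaging the elementary effects over many base points is what turns this OAT perturbation into a global screening measure rather than a single local derivative.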

  6. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  7. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 are presented. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  8. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 are presented. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  9. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
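    The idea behind such smoothing-based procedures can be sketched with a simple binned conditional mean standing in for LOESS: the fraction of output variance explained by the smoothed curve ranks each input, and it catches nonlinear effects that a linear-regression R^2 would miss. The test function and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.random((n, 3))
# x0 acts nonlinearly, x1 weakly, x2 not at all
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.1, n)

def main_effect_r2(i, bins=25):
    """Variance of the smoothed conditional mean E[y | x_i], as a share of Var(y)."""
    edges = np.linspace(0.0, 1.0, bins + 1)[1:-1]
    idx = np.digitize(X[:, i], edges)
    bin_means = np.array([y[idx == b].mean() for b in range(bins)])
    return np.var(bin_means[idx]) / np.var(y)

r2 = [main_effect_r2(i) for i in range(3)]
print(np.round(r2, 2))
```

    A linear fit would assign x0 almost no importance (the sine averages out), while the nonparametric smoother correctly ranks it first, which is the point the abstract makes about LOESS-style procedures.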

  10. River Runoff Sensitivity in Eastern Siberia to Global Climate Warming

    NASA Astrophysics Data System (ADS)

    Georgiadi, A. G.; Milyukova, I. P.; Kashutina, E.

    2008-12-01

Over the last several decades, significant climate warming has been observed in the permafrost regions of Eastern Siberia. These changes include increases in air temperature as well as in precipitation. Changes in regional climate are accompanied by river runoff changes. The analysis of the data shows that in the past 25 years, the largest contribution to the annual river runoff increase in the lower reaches of the Lena (Kyusyur) is made (in descending order) by the Lena river watershed (above Tabaga), the Aldan river (Okhotsky Perevoz), and the Vilyui river (Khatyryk-Khomo). A similar relation is also retained in the case of floods, with the seasonal river runoff of the Vilyui river being slightly decreased. Completely different relations are noted in winter, when a substantial river runoff increase is recorded in the lower reaches of the Lena river. In this case the major contribution to the winter river runoff increase at the Lena outlet is made by the winter river runoff increase on the Vilyui river. Unlike the above cases, the summer-fall river runoff in the lower reaches of the Lena river tends to decrease, which is similar to the trend exhibited by the Vilyui river. At the same time, the river runoff of the Lena (Tabaga) and Aldan (Verkhoyansky Perevoz) rivers increases. According to the results of hydrological modeling, the anthropogenic climate warming expected in the XXI century could bring a more significant river runoff increase in the Lena river basin than the recent one. Hydrological responses to climate warming have been evaluated for the plain part of the Lena river basin based on a macroscale hydrological model with a simplified description of processes, developed at the Institute of Geography of the Russian Academy of Sciences. Two atmosphere-ocean global circulation models included in the IPCC assessments (ECHAM4/OPY3 and GFDL-R30) were used as scenarios of future global climate.

  11. Sensitivity analysis of the critical speed in railway vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Bigoni, D.; True, H.; Engsig-Karup, A. P.

    2014-05-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
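    The variance-based total sensitivity indices used for this kind of ranking can be estimated with the standard two-matrix (Saltelli/Jansen) sampling scheme; the toy function below stands in for the expensive vehicle-dynamics solver:

```python
import numpy as np

def f(X):
    # Hypothetical stand-in for the critical-speed model: three "suspension
    # parameters", one of them nearly inert, with an x0-x1 interaction
    return X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 1] + 0.1 * X[:, 2]

rng = np.random.default_rng(0)
N, d = 20000, 3
A, B = rng.random((N, d)), rng.random((N, d))
fA = f(A)
var_y = np.var(np.concatenate([fA, f(B)]))

# Jansen estimator of the total-effect index S_Ti: the share of Var(Y) caused
# by x_i including all interactions that involve x_i
S_T = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # matrix A with column i taken from B
    S_T.append(np.mean((fA - f(ABi)) ** 2) / (2.0 * var_y))
print(np.round(S_T, 2))
```

    Parameters whose total index is near zero (here x2) can safely be fixed at nominal values, which is exactly the screening use described in the abstract.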

  12. Dynamic analysis of global copper flows. Global stocks, postconsumer material flows, recycling indicators, and uncertainty evaluation.

    PubMed

    Glöser, Simon; Soulier, Marcel; Tercero Espinoza, Luis A

    2013-06-18

We present a dynamic model of global copper stocks and flows which allows a detailed analysis of recycling efficiencies, copper stocks in use, and dissipated and landfilled copper. The model is based on historical mining and refined copper production data (1910-2010) enhanced by a unique data set of recent global semifinished goods production and copper end-use sectors provided by the copper industry. To enable the consistency of the simulated copper life cycle in terms of a closed mass balance, particularly the matching of recycled metal flows to reported historical annual production data, a method was developed to estimate the yearly global collection rates of end-of-life (postconsumer) scrap. Based on this method, we provide estimates of 8 different recycling indicators over time. The main indicator for the efficiency of global copper recycling from end-of-life (EoL) scrap--the EoL recycling rate--was estimated to be 45% on average, ± 5% (one standard deviation) due to uncertainty and variability over time in the period 2000-2010. As uncertainties of specific input data--mainly concerning assumptions on end-use lifetimes and their distribution--are high, a sensitivity analysis with regard to the effect of uncertainties in the input data on the calculated recycling indicators was performed. The sensitivity analysis included a stochastic (Monte Carlo) uncertainty evaluation with 10^5 simulation runs. PMID:23725041
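    A stripped-down version of such a Monte Carlo uncertainty evaluation: propagate assumed input distributions through the indicator and report the spread. The flow values and uncertainties below are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo runs (the study uses 10^5)

# Hypothetical annual flows (Mt/yr): the recycled EoL flow is comparatively
# well known, while the EoL scrap actually generated depends on uncertain
# product-lifetime assumptions and so carries the larger spread.
recycled_eol = rng.normal(4.0, 0.2, n)
generated_eol = rng.normal(9.0, 1.0, n)

eol_recycling_rate = recycled_eol / generated_eol
print(f"EoL recycling rate: {eol_recycling_rate.mean():.2f} "
      f"+/- {eol_recycling_rate.std():.2f} (one standard deviation)")
```

    Because the indicator is a ratio, the lifetime-driven uncertainty in the denominator dominates the spread, mirroring the paper's finding that lifetime assumptions drive the ±5% band.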

  13. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible to not only predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the Adjoint Method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  14. Sensitivity Analysis in the Model Web

    NASA Astrophysics Data System (ADS)

    Jones, R.; Cornford, D.; Boukouvalas, A.

    2012-04-01

The Model Web, and in particular the Uncertainty enabled Model Web being developed in the UncertWeb project, aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web it is likely that the users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary, the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. 
Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In

  15. Shortwave heating response to water vapor as a significant source of uncertainty in global hydrological sensitivity in CMIP5 models

    NASA Astrophysics Data System (ADS)

    DeAngelis, A. M.; Qu, X.; Hall, A. D.; Klein, S. A.

    2014-12-01

    The hydrological cycle is expected to undergo substantial changes in response to global warming, with all climate models predicting an increase in global-mean precipitation. There is considerable spread among models, however, in the projected increase of global-mean precipitation, even when normalized by surface temperature change. In an attempt to develop a better physical understanding of the causes of this intermodel spread, we investigate the rapid and temperature-mediated responses of global-mean precipitation to CO2 forcing in an ensemble of CMIP5 models by applying regression analysis to pre-industrial and abrupt quadrupled CO2 simulations, and focus on the atmospheric radiative terms that balance global precipitation. The intermodel spread in the temperature-mediated component, which dominates the spread in total hydrological sensitivity, is highly correlated with the spread in temperature-mediated clear-sky shortwave (SW) atmospheric heating among models. Upon further analysis of the sources of intermodel variability in SW heating, we find that increases of upper atmosphere and (to a lesser extent) total column water vapor in response to 1K surface warming only partly explain intermodel differences in the SW response. Instead, most of the spread in the SW heating term is explained by intermodel differences in the sensitivity of SW absorption to fixed changes in column water vapor. This suggests that differences in SW radiative transfer codes among models are the dominant source of variability in the response of atmospheric SW heating to warming. Better understanding of the SW heating sensitivity to water vapor in climate models appears to be critical for reducing uncertainty in the global hydrological response to future warming. Current work entails analysis of observations to potentially constrain the intermodel spread in SW sensitivity to water vapor, as well as more detailed investigation of the radiative transfer schemes in different models and how

  16. A simple global carbon and energy coupled cycle model for global warming simulation: sensitivity to the light saturation effect

    NASA Astrophysics Data System (ADS)

    Ichii, Kazuhito; Matsui, Yohei; Murakami, Kazutaka; Mukai, Toshikazu; Yamaguchi, Yasushi; Ogawa, Katsuro

    2003-04-01

A simple Earth system model, the Four-Spheres Cycle of Energy and Mass (4-SCEM) model, has been developed to simulate global warming due to anthropogenic CO2 emission. The model consists of the Atmosphere-Earth Heat Cycle (AEHC) model, the Four Spheres Carbon Cycle (4-SCC) model, and their feedback processes. The AEHC model is a one-dimensional radiative convective model, which includes the greenhouse effect of CO2 and H2O, and one cloud layer. The 4-SCC model is a box-type carbon cycle model, which includes biospheric CO2 fertilization, vegetation area variation, the vegetation light saturation effect and the HILDA oceanic carbon cycle model. The feedback processes between carbon cycle and climate considered in the model are the temperature dependencies of water vapor content, soil decomposition and ocean surface chemistry. The future status of the global carbon cycle and climate was simulated up to the year 2100 based on the "business as usual" (IS92a) emission scenario, followed by a linear decline in emissions to zero in the year 2200. The atmospheric CO2 concentration reaches 645 ppmv in 2100, peaks at approximately 760 ppmv around the year 2170, and then settles to a steady state at 600 ppmv. The projected CO2 concentration was lower than those of past carbon cycle studies, because we included the light saturation effect of vegetation. The sensitivity analysis showed that uncertainties derived from the light saturation effect of vegetation and land use CO2 emissions were the primary cause of uncertainties in projecting future CO2 concentrations. The climate feedback effects showed rather small sensitivities compared with the impacts of those two effects. Analyses of satellite-based net primary production trends can somewhat decrease the uncertainty in quantifying CO2 emissions due to land use changes. On the other hand, as the estimated parameter in vegetation light saturation was poorly constrained, we have to quantify and constrain the effect more accurately.

  17. Sensitivity analysis and application in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Tang, R.

    2013-12-01

    In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that is unavoidably contaminated by various noises and is sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the geophysical inverse problem, the result is ambiguous, and it is not clear which model features are well resolved by the data. The interpretation of the result is therefore intractable. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also known as the Jacobian matrix or the sensitivity matrix, is comprised of the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by applying a perturbation approach together with reciprocity theory. By calculating the sensitivity matrix we obtain visualized sensitivity plots that place the solution under scrutiny: the poorly resolved parts of the model are indicated and should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, e.g. combining gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between the observed data and subsurface parameters, finding a design scheme that provides maximum information content within a restricted budget is quite difficult.
Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of each survey's sensitivity with respect to the
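    The perturbation route to the Jacobian described above can be sketched generically: perturb each model parameter in turn and difference the forward responses. The forward operator below is a stand-in, not an MT solver.

```python
import numpy as np

def forward(model):
    # Stand-in forward operator; in MT this would map a resistivity model
    # to apparent resistivities and phases at the observation sites.
    return np.array([np.sum(model ** 2), np.sum(np.log1p(model))])

def jacobian_fd(forward, model, eps=1e-6):
    """Sensitivity (Jacobian) matrix J[i, j] = d(data_i)/d(model_j),
    approximated by perturbing each model parameter in turn."""
    d0 = forward(model)
    J = np.zeros((d0.size, model.size))
    for j in range(model.size):
        m = model.copy()
        m[j] += eps
        J[:, j] = (forward(m) - d0) / eps
    return J

m = np.array([1.0, 2.0, 3.0])
J = jacobian_fd(forward, m)
# Column norms give a crude measure of how well each parameter is
# resolved by the data; small norms flag poorly resolved parameters.
resolution = np.linalg.norm(J, axis=0)
```

    In practice the perturbation approach is expensive (one forward solve per parameter), which is why reciprocity-based calculations, as mentioned in the abstract, are preferred for large models.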

  18. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  19. Cultural Sensitivity: The Key to Teaching Global Business.

    ERIC Educational Resources Information Center

    Timm, Judee A.

    2003-01-01

    More ethical practices in business begin with ethical training in business schools. International business education classes can compare corporate codes and actual behavior; explore the role of cultural differences in values, principles, and standards; and analyze ethical dilemmas in a global environment. (SK)

  20. Probabilistic sensitivity analysis in health economics.

    PubMed

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. PMID:21930515

  1. A numerical comparison of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
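    At the variance-based end of the spectrum of techniques such reports compare, first-order indices S_i = Var(E[Y|X_i]) / Var(Y) can be estimated by brute-force binning. This is a generic sketch with a toy model, not one of the specific dosimetry techniques assessed in the report; for Y = 2*X1 + X2^2 with uniform inputs the exact values are S1 = 15/19 and S2 = 4/19.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: output responds linearly to x1 and quadratically to x2.
    return 2.0 * x[:, 0] + x[:, 1] ** 2

def first_order_indices(model, dim=2, n=200_000, bins=50):
    """Brute-force estimate of S_i = Var(E[Y | X_i]) / Var(Y):
    bin each input and take the variance of the conditional bin means."""
    X = rng.uniform(0.0, 1.0, size=(n, dim))
    Y = model(X)
    V = Y.var()
    S = []
    for i in range(dim):
        edges = np.linspace(0.0, 1.0, bins + 1)
        idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, bins - 1)
        cond_means = np.array([Y[idx == b].mean() for b in range(bins)])
        S.append(cond_means.var() / V)
    return S

S1, S2 = first_order_indices(model)  # close to 15/19 and 4/19
```

    Rankings from such indices can then be compared against cheaper local or one-at-a-time measures, which is essentially the kind of comparison the report carries out.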

  2. Pediatric Pain, Predictive Inference, and Sensitivity Analysis.

    ERIC Educational Resources Information Center

    Weiss, Robert

    1994-01-01

    Coping style and the effects of a counseling intervention on pain tolerance were studied in 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…

  3. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    NASA Astrophysics Data System (ADS)

    Bernstein, Diana N.; Neelin, J. David

    2016-06-01

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive "dangerous ranges." The low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  4. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  5. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance had the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  6. Geothermal well cost sensitivity analysis: current status

    SciTech Connect

    Carson, C.C.; Lin, Y.T.

    1980-01-01

    The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent, but that in specific wells the advances considered can result in significant cost reductions.

  7. Sensitivity analysis for magnetic induction tomography.

    PubMed

    Soleimani, Manuchehr; Jersey-Willuhn, Karen

    2004-01-01

    This work focuses on sensitivity analysis of magnetic induction tomography in terms of theoretical modelling and numerical implementation. We will explain a new and efficient method to determine the Jacobian matrix, directly from the results of the forward solution. The results presented are for the eddy current approximation, and are given in terms of magnetic vector potential, which is computationally convenient, and which may be extracted directly from the FE solution of the forward problem. Examples of sensitivity maps for an opposite sensor geometry are also shown. PMID:17271947

  8. MUSE instrument global performance analysis

    NASA Astrophysics Data System (ADS)

    Loupias, M.; Bacon, R.; Caillier, P.; Fleischmann, A.; Jarno, A.; Kelz, A.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J. L.; Manescau, A.; Nicklas, H.; Parès, L.; Pécontal, A.; Reiss, R.; Remillieux, A.; Renault, E.; Roth, M. M.; Rupprecht, G.; Stuik, R.

    2010-07-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second generation instrument developed for ESO (European Southern Observatory) to be installed on the VLT (Very Large Telescope) in 2012. The MUSE instrument can simultaneously record 90,000 spectra in the visible wavelength range (465-930nm) across a 1x1 arcmin2 field of view, thanks to 24 identical Integral Field Units (IFU). A collaboration of 7 institutes has successfully passed the Final Design Review and is currently working on the first sub-assemblies. The sharing of performances has been based on 5 main functional sub-systems. The Fore Optics sub-system derotates and anamorphoses the VLT Nasmyth focal plane image, and the Splitting and Relay Optics associated with the Main Structure feed each IFU with 1/24th of the field of view. Each IFU is composed of a 3D function ensured by an image slicer system and a spectrograph, and a detection function ensured by a 4k*4k CCD cooled down to 163 K. The 5th function is the calibration and data reduction of the instrument. This article describes the breakdown of performances between these sub-systems (throughput, image quality...) and underlines the constraining parameters of the interfaces, either internal or with the VLT. The validation of all these requirements is a critical task, started a few months ago, which requires clear traceability and performance analysis.

  9. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  10. Evaluating the sensitivity of local temperature distributions to global climate change

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2012-04-01

    Climate change adaptation activities take place at regional and local scales. The sensitivity of climate to increasing greenhouse gases is, however, most often studied at the global scale [Knutti and Hegerl 2008, and references therein]. At adaptation-relevant spatial scales, information is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005]. These face significant questions of robustness and reliability as a basis for forecasts on such scales [Stainforth et al., 2007]. Here we propose a different approach, using observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. There are many advantages to such an approach: it avoids issues relating to model imperfections, it can be focused on decision-relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Our approach takes timeseries of local daily temperature from specific locations and extracts the changing cumulative distribution function (cdf) over time. We use the e-obs dataset to construct such cdf-timeseries for locations across Europe. We analyse these changing cdfs using a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural variability and/or the consequences of climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions. These sensitivities are shown to be geographically varying across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We nevertheless find many regionally consistent patterns of response of potential value in adaptation planning. Both the methodology and a sensitivity analysis will be presented.
The technique has the potential to be applied to many other variables in addition to
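    The quantile-by-quantile comparison of empirical cdfs described above can be sketched with synthetic data (illustrative only, not the e-obs dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily temperatures for two decade-long periods: the later
# period is assumed warmer (+1 degree) and slightly more variable.
t_past = rng.normal(10.0, 4.0, size=3650)
t_recent = rng.normal(11.0, 4.4, size=3650)

def quantile_shift(past, recent, q):
    """Horizontal shift of the empirical cdf at quantile q, i.e. the
    change in the temperature exceeded with probability 1 - q."""
    return np.quantile(recent, q) - np.quantile(past, q)

# Different parts of the distribution respond differently: when
# variability increases, the warm tail shifts more than the cold tail.
shifts = {q: quantile_shift(t_past, t_recent, q) for q in (0.05, 0.5, 0.95)}
```

    Repeating this at each location yields the geographically varying quantile sensitivities discussed in the abstract; the remaining task, which the deconstruction addresses, is separating such shifts from natural variability.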

  11. Evaluating the sensitivity of local temperature distributions to global climate change

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D.; Watkins, N. W.

    2012-12-01

    Climate change adaptation activities take place at regional and local scales. The sensitivity of climate to increasing greenhouse gases is, however, most often studied at the global scale [Knutti and Hegerl 2008, and references therein]. At adaptation-relevant spatial scales, information is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005]. These face significant questions of robustness and reliability as a basis for forecasts on such scales [Stainforth et al., 2007]. Here we propose a different approach, using observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. There are many advantages to such an approach: it avoids issues relating to model imperfections, it can be focused on decision-relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Our approach takes timeseries of local daily temperature from specific locations and extracts the changing cumulative distribution function (cdf) over time. We use the e-obs dataset to construct such cdf-timeseries for locations across Europe. We analyse these changing cdfs using a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural variability and/or the consequences of climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions. These sensitivities are shown to be geographically varying across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We nevertheless find many regionally consistent patterns of response of potential value in adaptation planning. Both the methodology and a sensitivity analysis will be presented.
The technique has the potential to be applied to many other variables in addition to

  12. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  13. Rotary absorption heat pump sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Bamberger, J. A.; Zalondek, F. R.

    1990-03-01

    Conserve Resources, Incorporated is currently developing an innovative, patented absorption heat pump. The heat pump uses rotation and thin film technology to enhance the absorption process and to provide a more efficient, compact system. The results are presented of a sensitivity analysis of the rotary absorption heat pump (RAHP) performance conducted to further the development of a 1-ton RAHP. The objective of the uncertainty analysis was to determine the sensitivity of RAHP steady state performance to uncertainties in design parameters. Prior to conducting the uncertainty analysis, a computer model was developed to describe the performance of the RAHP thermodynamic cycle. The RAHP performance is based on many interrelating factors, not all of which could be investigated during the sensitivity analysis. Confirmatory measurements of LiBr/H2O properties during absorber/generator operation will provide experimental verification that the system is operating as it was designed to operate. Quantities to be measured include: flow rate in the absorber and generator, film thickness, recirculation rate, and the effects of rotational speed on these parameters.

  14. A climate sensitivity test using a global cloud resolving model under an aqua planet condition

    NASA Astrophysics Data System (ADS)

    Miura, Hiroaki; Tomita, Hirofumi; Nasuno, Tomoe; Iga, Shin-ichi; Satoh, Masaki; Matsuno, Taroh

    2005-10-01

    A global Cloud Resolving Model (CRM) is used in a climate sensitivity test for an aqua planet in this first attempt to evaluate climate sensitivity without cumulus parameterizations. Results from a control experiment and an experiment with global sea surface temperature (SST) warmer by 2 K are examined. Notable features in the simulation with warmer SST include a wider region of active convection, a weaker Hadley circulation, mid-tropospheric moistening in the subtropics, and more clouds in the extratropics. Negative feedback from short-wave radiation reduces the climate sensitivity parameter compared to a result in a more conventional model with a cumulus parameterization.

  15. Global thermohaline circulation. Part 2: Sensitivity with interactive atmospheric transports

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A hybrid coupled ocean-atmospheric model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part 1. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional additive flux adjustment, but quantitatively more stable.

  16. Global Precipitation Analysis Using Satellite Observations

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric

    2002-01-01

    Global precipitation analysis covering the last few decades and the impact of the new TRMM (Tropical Rainfall Measuring Mission) observations are reviewed in the context of weather and climate applications. All the data sets discussed are the result of mergers of information from multiple satellites and gauges, where available. The focus of the talk is on TRMM-based 3 hr. analyses that use TRMM to calibrate polar-orbit microwave observations from SSM/I (and other satellites) and geosynchronous IR observations and merges the various calibrated observations into a final, 3 hr. resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998-present) at the end of 2002. A real-time version of this merged product is being produced and is available at 0.25 deg latitude-longitude resolution over the latitude range from 50 deg N-50 deg S. Examples will be shown, including its use in monitoring flood conditions and in relating weather-scale patterns to climate-scale patterns. The 3-hourly analysis is placed in the context of two research products of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP). The first is the 23 year, monthly, globally complete precipitation analysis that is used to explore global and regional variations and trends and is compared to the much shorter TRMM tropical data set. The GPCP data set shows no significant global trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Nino and La Nina precipitation features is evident in the 23-year data set. This pattern is related to an increase with time in the number of combined months of El Nino and La Nina during the 23 year period. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both

  17. Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC)

    NASA Astrophysics Data System (ADS)

    Rad, Mary L.; Zou, Luyao; Sanders, James L.; Widicus Weaver, Susanna L.

    2016-01-01

    Context. Broadband receivers that operate at millimeter and submillimeter frequencies necessitate the development of new tools for spectral analysis and interpretation. Simultaneous, global, multimolecule, multicomponent analysis is necessary to accurately determine the physical and chemical conditions from line-rich spectra that arise from sources like hot cores. Aims: We aim to provide a robust and efficient automated analysis program to meet the challenges presented with the large spectral datasets produced by radio telescopes. Methods: We have written a program in the MATLAB numerical computing environment for simultaneous global analysis of broadband line surveys. The Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC) program uses the simplifying assumption of local thermodynamic equilibrium (LTE) for spectral analysis to determine molecular column density, temperature, and velocity information. Results: GOBASIC achieves simultaneous, multimolecule, multicomponent fitting for broadband spectra. The number of components that can be analyzed at once is only limited by the available computational resources. Analysis of subsequent sets of molecules or components is performed iteratively while taking the previous fits into account. All features of a given molecule across the entire window are fitted at once, which is preferable to the rotation diagram approach because global analysis is less sensitive to blended features and noise features in the spectra. In addition, the fitting method used in GOBASIC is insensitive to the initial conditions chosen, the fitting is automated, and fitting can be performed in a parallel computing environment. These features make GOBASIC a valuable improvement over previously available LTE analysis methods. A copy of the software is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/585/A23

  18. Estimation of global aortic pulse wave velocity by flow-sensitive 4D MRI.

    PubMed

    Markl, Michael; Wallis, Wolf; Brendecke, Stefanie; Simon, Jan; Frydrychowicz, Alex; Harloff, Andreas

    2010-06-01

    The aim of this study was to determine the value of flow-sensitive four-dimensional MRI for the assessment of pulse wave velocity as a measure of vessel compliance in the thoracic aorta. Findings in 12 young healthy volunteers were compared with those in 25 stroke patients with aortic atherosclerosis and an age-matched normal control group (n = 9). Results from pulse wave velocity calculations incorporated velocity data from the entire aorta and were compared to those of standard methods based on flow waveforms at only two specific anatomic landmarks. Global aortic pulse wave velocity was higher in patients with atherosclerosis (7.03 +/- 0.24 m/sec) compared to age-matched controls (6.40 +/- 0.32 m/sec). Both were significantly (P < 0.001) increased compared to younger volunteers (4.39 +/- 0.32 m/sec). Global aortic pulse wave velocity in young volunteers was in good agreement with previously reported MRI studies and catheter measurements. Estimation of measurement inaccuracies and error propagation analysis demonstrated only minor uncertainties in measured flow waveforms and moderate relative errors below 16% for aortic compliance in all 46 subjects. These results demonstrate the feasibility of pulse wave velocity calculation based on four-dimensional MRI data by exploiting its full volumetric coverage, which may also be an advantage over standard two-dimensional techniques in the often distorted course of the aorta in patients with atherosclerosis. PMID:20512861
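    The two-landmark method mentioned above reduces to PWV = path length / transit time. A hypothetical sketch with synthetic waveforms follows; the waveform shape, plane spacing, and cross-correlation delay estimator are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

dt = 0.01                          # temporal resolution (s), illustrative
t = np.arange(0.0, 1.0, dt)

def flow_pulse(delay):
    # Idealized systolic flow waveform arriving `delay` s later downstream.
    return np.exp(-0.5 * ((t - 0.15 - delay) / 0.04) ** 2)

flow_prox = flow_pulse(0.00)       # proximal plane (ascending aorta)
flow_dist = flow_pulse(0.04)       # distal plane, 40 ms transit time

# Transit time from the peak of the cross-correlation of the two waveforms.
lags = np.arange(-len(t) + 1, len(t)) * dt
transit = lags[np.argmax(np.correlate(flow_dist, flow_prox, mode="full"))]

path_length = 0.25                 # centerline distance between planes (m)
pwv = path_length / transit        # pulse wave velocity (m/s)
```

    The 4D approach generalizes this by fitting the waveform arrival time along many planes covering the whole aortic centerline, rather than just two landmarks.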

  19. Trends in sensitivity analysis practice in the last decade.

    PubMed

    Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano

    2016-10-15

    The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices over the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis of the Elsevier Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Even if OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with a preference for regression- and variance-based techniques. Even after adjusting for the growth of publications in the sole modelling field, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the sole articles published in chemical modelling, a field historically proficient in the use of SA methods. PMID:26934843

  20. Global QCD Analysis and Hadron Collider Physics

    SciTech Connect

    Tung, W.-K.

    2005-03-22

    The role of global QCD analysis of parton distribution functions (PDFs) in collider physics at the Tevatron and LHC is surveyed. The current status of PDF analyses is reviewed, emphasizing the uncertainties and the open issues. The stability of NLO QCD global analysis and its prediction on 'standard candle' W/Z cross sections at hadron colliders are discussed. The importance of the precise measurement of various W/Z cross sections at the Tevatron in advancing our knowledge of PDFs, hence in enhancing the capabilities of making significant progress in W mass and top quark parameter measurements, as well as the discovery potentials of Higgs and New Physics at the Tevatron and LHC, is emphasized.

  1. The Theoretical Foundation of Sensitivity Analysis for GPS

    NASA Astrophysics Data System (ADS)

    Shikoska, U.; Davchev, D.; Shikoski, J.

    2008-10-01

    In this paper the equations of sensitivity analysis are derived and the theoretical underpinnings for the analyses are established. The paper propounds land-vehicle navigation concepts and a definition of sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis to the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
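
    To make the idea concrete, here is a hedged sketch of sensitivity analysis applied to a linear Kalman filter: a 1-D constant-velocity filter whose final state estimate is differentiated with respect to a noise parameter. This uses finite differences rather than the paper's analytically derived sensitivity equations, and the filter setup and parameter values are invented for illustration.

```python
import numpy as np

def kf_final_estimate(zs, q, r, dt=1.0):
    """1-D constant-velocity Kalman filter; returns the final position estimate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10.0
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # gain
        x = x + K @ (np.array([z]) - H @ x)           # update
        P = (np.eye(2) - K @ H) @ P
    return x[0]

# finite-difference sensitivity of the estimate to the measurement noise R
rng = np.random.default_rng(0)
zs = np.arange(20) * 1.0 + rng.normal(0, 0.5, 20)     # noisy track
eps = 1e-4
dx_dr = (kf_final_estimate(zs, 0.01, 0.25 + eps) -
         kf_final_estimate(zs, 0.01, 0.25 - eps)) / (2 * eps)
print(f"d(estimate)/dR ~ {dx_dr:.3f}")
```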

  2. A global analysis of island pyrogeography

    NASA Astrophysics Data System (ADS)

    Trauernicht, C.; Murphy, B. P.

    2014-12-01

    Islands have provided insight into the ecological role of fire worldwide through research on the positive feedbacks between fire and nonnative grasses, particularly in the Hawaiian Islands. However, the global extent and frequency of fire on islands as an ecological disturbance has received little attention, possibly because 'natural fires' on islands are typically limited to infrequent dry lightning strikes and isolated volcanic events. But because most contemporary fires on islands are anthropogenic, islands provide ideal systems with which to understand the linkages between socio-economic development, shifting fire regimes, and ecological change. Here we use the density of satellite-derived (MODIS) active fire detections for the years 2000-2014 and global data sets of vegetation, climate, population density, and road development to examine the drivers of fire activity on islands at the global scale, and compare these results to existing pyrogeographic models derived from continental data sets. We also use the Hawaiian Islands as a case study to understand the extent to which novel fire regimes can pervade island ecosystems. The global analysis indicates that fire is a frequent disturbance across islands worldwide, strongly affected by human activities, indicating that people can more readily override climatic drivers on islands than on continental land masses. The extent of fire activity derived from local records in the Hawaiian Islands reveals that our global analysis likely underestimates the prevalence of fire among island systems and that the combined effects of human activity and invasion by nonnative grasses can create conditions for frequent and relatively large-scale fires. Understanding the extent of these novel fire regimes, and mitigating their impacts, is critical to reducing the current and rapid degradation of native island ecosystems worldwide.

  3. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
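
    One of the sensitivity measures mentioned, estimating success probability as a function of a dispersed input, can be sketched by binning the Monte Carlo runs and taking the success fraction per bin. This is a toy illustration with invented variable names and thresholds, not the Critical Factors Tool itself.

```python
import numpy as np

def success_probability_curve(x, success, bins=10):
    """Estimate P(success) as a function of one dispersed input by binning
    the Monte Carlo runs into quantile bins and averaging success per bin."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    centers, probs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        centers.append(x[mask].mean())
        probs.append(success[mask].mean())
    return np.array(centers), np.array(probs)

# toy Monte Carlo: touchdown miss distance grows with a wind dispersion variable
rng = np.random.default_rng(4)
wind = rng.normal(0, 1, 10000)
miss = np.abs(2 * wind + rng.normal(0, 1, 10000))
success = miss < 2.0                     # requirement: miss distance < limit
c, p = success_probability_curve(wind, success)
print(np.round(p, 2))  # success probability drops toward the wind extremes
```

    Inputs whose curve is flat have little influence on requirement satisfaction; a strongly sloped or U-shaped curve flags a driving factor.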

  4. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be dealt with by using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
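
    The contrast between a single smooth GP and a region-partitioned ("treed") fit can be illustrated on a toy bifurcating response. This is a hedged NumPy sketch with a fixed partition point and assumed RBF-kernel hyperparameters, not the treed-GP algorithm of the paper, which learns the partition from data.

```python
import numpy as np

def gp_predict(xtr, ytr, xte, ell=0.3, noise=0.05):
    """Posterior mean of a zero-mean GP with an RBF kernel (pure NumPy)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(xtr, xtr) + noise * np.eye(len(xtr))
    return k(xte, xtr) @ np.linalg.solve(K, ytr)

# bifurcating response: a jump of height 2 at x = 0.5
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.5, np.sin(6 * x), 2.0 + np.sin(6 * x)) + rng.normal(0, 0.1, 200)
xt = np.array([0.49, 0.51])              # test points straddling the jump

single = gp_predict(x, y, xt)            # one GP fitted across the jump
left, right = x < 0.5, x >= 0.5          # "treed": one GP per region
treed = np.array([gp_predict(x[left], y[left], xt[:1])[0],
                  gp_predict(x[right], y[right], xt[1:])[0]])
# the treed fit preserves the discontinuity that the single GP smooths over
```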

  5. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
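
    A post-Monte-Carlo ranking of this kind can be sketched with squared correlation coefficients as a simple (linear, first-order) estimate of each input's contribution to the output variance. This is an illustration of the idea, not SATOOL's actual algorithm, and the sample data are invented.

```python
import numpy as np

def rank_sensitive_inputs(X, y):
    """Rank inputs by squared correlation with the output: a linear,
    first-order estimate of each variable's share of output variance."""
    r2 = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2
                   for j in range(X.shape[1])])
    order = np.argsort(r2)[::-1]          # most influential first
    return order, r2

# toy Monte Carlo sample: y depends strongly on x0, weakly on x2, not on x1
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 5000)
order, r2 = rank_sensitive_inputs(X, y)
print(order.tolist())   # → [0, 2, 1]
```

    Squared correlations only capture linear, additive effects; variance-based indices (e.g., Sobol) are the non-linear generalisation.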

  6. Long Trajectory for the Development of Sensitivity to Global and Biological Motion

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2011-01-01

    We used a staircase procedure to test sensitivity to (1) global motion in random-dot kinematograms moving at 4° s⁻¹ and 18° s⁻¹ and (2) biological motion. Thresholds were defined as (1) the minimum percentage of signal dots (i.e. the maximum percentage of noise dots) necessary for accurate discrimination of upward versus…

  7. HYDROLOGIC SENSITIVITIES OF THE SACRAMENTO-SAN JOAQUIN RIVER BASIN, CA TO GLOBAL WARMING

    EPA Science Inventory

    The hydrologic sensitivities of four medium-sized mountainous catchments in the Sacramento and San Joaquin River basins to long-term global warming were analyzed. The hydrologic response of these catchments, all of which are dominated by spring snowmelt runoff, was simulated by t...

  8. Toward a Globally Sensitive Definition of Inclusive Education Based in Social Justice

    ERIC Educational Resources Information Center

    Shyman, Eric

    2015-01-01

    While many policies, pieces of legislation and educational discourse focus on the concept of inclusion, or inclusive education, the field of education as a whole lacks a clear, precise and comprehensive definition that is both globally sensitive and based in social justice. Even international efforts including the UN Convention on the Rights of…

  9. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 milestone, Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160), contains the feature recognition capability required by the user community for certain verification and validation tasks focused on sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met, including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  10. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
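
    The kind of computation LSENS performs can be miniaturized as follows: integrate a small gas-phase kinetics system and obtain a sensitivity coefficient of a dependent variable with respect to a rate-coefficient parameter. This sketch uses SciPy and central finite differences rather than LSENS's internal methods; the mechanism (A → B → C) and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A -> B -> C, first-order steps with rate coefficients k1, k2
def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def final_b(k1, k2, t_end=2.0):
    """Concentration of B at t_end, starting from pure A."""
    sol = solve_ivp(rhs, (0, t_end), [1.0, 0.0, 0.0], args=(k1, k2),
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# sensitivity coefficient d[B]/dk1 by central finite differences
k1, k2, eps = 1.0, 0.5, 1e-5
s = (final_b(k1 + eps, k2) - final_b(k1 - eps, k2)) / (2 * eps)
print(f"d[B]/dk1 at t=2 ~ {s:.4f}")
```

    Production codes compute such coefficients by integrating the sensitivity equations alongside the state equations, which is cheaper and more accurate than differencing, but the quantity estimated is the same.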

  11. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and a weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
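
    Of the global methods listed, the partial rank correlation coefficient is compact enough to sketch directly. The following is a minimal NumPy version for illustration; SBML-SAT's implementation details may differ, and ties in the data are ignored here.

```python
import numpy as np

def _ranks(v):
    """Rank transform (0..n-1); no tie handling, fine for continuous data."""
    return np.argsort(np.argsort(v)).astype(float)

def prcc(X, y):
    """Partial rank correlation of each input with the output, controlling
    for all other inputs via linear regression on the rank-transformed data."""
    Xr = np.column_stack([_ranks(X[:, j]) for j in range(X.shape[1])])
    yr = _ranks(y)
    coeffs = []
    for j in range(Xr.shape[1]):
        A = np.column_stack([np.delete(Xr, j, axis=1), np.ones(len(yr))])
        res_x = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        res_y = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

rng = np.random.default_rng(2)
X = rng.uniform(size=(2000, 3))
y = 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, 2000)   # x2 is inert
vals = prcc(X, y)   # strong positive, moderate negative, near zero
print(np.round(vals, 2))
```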

  12. Multicomponent dynamical nucleation theory and sensitivity analysis.

    PubMed

    Kathmann, Shawn M; Schenter, Gregory K; Garrett, Bruce C

    2004-05-15

    Vapor to liquid multicomponent nucleation is a dynamical process governed by a delicate interplay between condensation and evaporation. Since the population of the vapor phase is dominated by monomers at reasonable supersaturations, the formation of clusters is governed by monomer association and dissociation reactions. Although there is no intrinsic barrier in the interaction potential along the minimum energy path for the association process, the formation of a cluster is impeded by a free energy barrier. Dynamical nucleation theory provides a framework in which equilibrium evaporation rate constants can be calculated and the corresponding condensation rate constants determined from detailed balance. The nucleation rate can then be obtained by solving the kinetic equations. The rate constants governing the multistep kinetics of multicomponent nucleation including sensitivity analysis and the potential influence of contaminants will be presented and discussed. PMID:15267849

  13. Sensitivity analysis of periodic matrix population models.

    PubMed

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments. PMID:23316494
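
    The object of such an analysis, the sensitivity of a cycle-level output to a parameter at one phase of the cycle, can be illustrated numerically. This sketch perturbs entries of one seasonal matrix and differentiates the dominant eigenvalue of the periodic product by finite differences; it is a toy two-phase model, not the paper's matrix-calculus machinery, and the matrix entries are invented.

```python
import numpy as np

def growth_rate(mats):
    """Dominant eigenvalue of the periodic product (matrices applied in order)."""
    prod = np.eye(mats[0].shape[0])
    for m in mats:
        prod = m @ prod
    return np.max(np.abs(np.linalg.eigvals(prod)))

# two-phase seasonal model: breeding matrix A, then survival matrix B
A = np.array([[0.0, 2.0], [0.5, 0.8]])
B = np.array([[0.9, 0.0], [0.0, 0.7]])
lam = growth_rate([A, B])

# sensitivity of lambda to each entry of A by central finite differences
eps = 1e-6
sens = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += eps
        Am[i, j] -= eps
        sens[i, j] = (growth_rate([Ap, B]) - growth_rate([Am, B])) / (2 * eps)
print(np.round(sens, 3))
```

    The matrix-calculus approach of the paper yields these derivatives exactly and for vector- or matrix-valued outputs as well, but the finite-difference version shows what is being computed.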

  14. Global meta-analysis of transcriptomics studies.

    PubMed

    Caldas, José; Vinga, Susana

    2014-01-01

    Transcriptomics meta-analysis aims at re-using existing data to derive novel biological hypotheses, and is motivated by the public availability of a large number of independent studies. Current methods are based on breaking down studies into multiple comparisons between phenotypes (e.g. disease vs. healthy), based on the studies' experimental designs, followed by computing the overlap between the resulting differential expression signatures. While useful, in this methodology each study yields multiple independent phenotype comparisons, and connections are established not between studies, but rather between subsets of the studies corresponding to phenotype comparisons. We propose a rank-based statistical meta-analysis framework that establishes global connections between transcriptomics studies without breaking down studies into sets of phenotype comparisons. By using a rank product method, our framework extracts global features from each study, corresponding to genes that are consistently among the most expressed or differentially expressed genes in that study. Those features are then statistically modelled via a term-frequency inverse-document frequency (TF-IDF) model, which is then used for connecting studies. Our framework is fast and parameter-free; when applied to large collections of Homo sapiens and Streptococcus pneumoniae transcriptomics studies, it performs better than similarity-based approaches in retrieving related studies, using a Medical Subject Headings gold standard. Finally, we highlight via case studies how the framework can be used to derive novel biological hypotheses regarding related studies and the genes that drive those connections. Our proposed statistical framework shows that it is possible to perform a meta-analysis of transcriptomics studies with arbitrary experimental designs by deriving global expression features rather than decomposing studies into multiple phenotype comparisons. PMID:24586684
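
    The TF-IDF step of such a framework can be sketched for study-level gene features. Here each study is reduced to a set of top-ranked genes (standing in for the rank-product output), weighted by inverse document frequency across the collection, and compared by cosine similarity; the gene sets are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

def tfidf_matrix(studies, vocab):
    """Binary term occurrences (gene present in a study's feature set)
    weighted by inverse document frequency across the collection."""
    tf = np.array([[1.0 if g in s else 0.0 for g in vocab] for s in studies])
    df = tf.sum(axis=0)                       # in how many studies each gene appears
    idf = np.log(len(studies) / np.maximum(df, 1))
    return tf * idf

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# toy collection: each study is the set of its top rank-product genes
studies = [{"TP53", "EGFR", "MYC"}, {"TP53", "EGFR", "KRAS"}, {"FOXP2", "SHH"}]
vocab = sorted(set().union(*studies))
M = tfidf_matrix(studies, vocab)
sims = [cosine(M[0], M[i]) for i in range(1, 3)]
print([round(s, 2) for s in sims])  # study 1 is closer to study 0 than study 2
```

    The IDF weighting downplays genes that appear in most studies, so connections are driven by shared but distinctive features rather than ubiquitous ones.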

  15. The global analysis of DEER data

    PubMed Central

    Brandon, Suzanne; Beth, Albert H.; Hustedt, Eric J.

    2012-01-01

    Double Electron–Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies. PMID:22578560

  16. Toward the globalization of behavior analysis

    PubMed Central

    Malott, Maria E.

    2004-01-01

    Globalization could facilitate the long-term growth of behavior analysis, and although progress has been made, much yet needs to be done. Given the scarcity of resources, it is suggested that we draw from successes in the development of behavior analysis and establish behavioral programs around the world that embrace research, education, and practice as a focus of systematic globalization efforts. The strategy would require the implementation of cultural contingencies that support initiation and long-term program expansion. For program initiation, contingencies are needed to place pioneer behavior analysts in university units that would be unlikely to start a behavioral program otherwise. The task of these pioneers would be to build a critical mass that would multiply behavior-analytic repertoires, obtain research funding, conduct publishable research, and establish applied settings. For long-term program development, the field should expand internationally as it continues building the infrastructure needed to accelerate the demand for behavioral programs in higher education, scholarly work in behavior analysis, behavior analysts in existing jobs, and behavioral technology in the market place. PMID:22478413

  17. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole data set (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and on noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  18. Global QCD Analysis of Polarized Parton Densities

    SciTech Connect

    Stratmann, Marco

    2009-08-04

    We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.

  19. Global analysis of the phase calibration operation

    NASA Astrophysics Data System (ADS)

    Lannes, André

    2005-04-01

    A global approach to phase calibration is presented. The corresponding theoretical framework calls on elementary concepts of algebraic graph theory (spanning tree of maximal weight, cycles) and algebraic number theory (lattice, nearest lattice point). The traditional approach can thereby be better understood. In radio imaging and in optical interferometry, the self-calibration procedures must often be conducted with much care. The analysis presented should then help in finding a better compromise between the coverage of the calibration graph (which must be as complete as possible) and the quality of the solution (which must of course be reliable).

  1. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  2. On computational schemes for global-local stress analysis

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1989-01-01

    An overview is given of global-local stress analysis methods and associated difficulties and recommendations for future research. The phrase global-local analysis is understood to be an analysis in which some parts of the domain or structure are identified, for reasons of accurate determination of stresses and displacements or for more refined analysis than in the remaining parts. The parts of refined analysis are termed local and the remaining parts are called global. Typically local regions are small in size compared to global regions, while the computational effort can be larger in local regions than in global regions.

  3. Tsunamis: Global Exposure and Local Risk Analysis

    NASA Astrophysics Data System (ADS)

    Harbitz, C. B.; Løvholt, F.; Glimsdal, S.; Horspool, N.; Griffin, J.; Davies, G.; Frauenfelder, R.

    2014-12-01

    The 2004 Indian Ocean tsunami led to a better understanding of the likelihood of tsunami occurrence and potential tsunami inundation, and the Hyogo Framework for Action (HFA) was one direct result of this event. The United Nations International Strategy for Disaster Risk Reduction (UN-ISDR) adopted HFA in January 2005 in order to reduce disaster risk. As an instrument to compare the risk due to different natural hazards, an integrated worldwide study was implemented and published in several Global Assessment Reports (GAR) by UN-ISDR. The results of the global earthquake-induced tsunami hazard and exposure analysis for a return period of 500 years are presented. Both deterministic and probabilistic methods (PTHA) are used. The resulting hazard levels for both methods are compared quantitatively for selected areas. The comparison demonstrates that the analysis is rather rough, which is expected for a study aiming at average trends on a country level across the globe. It is shown that populous Asian countries account for the largest absolute number of people living in tsunami-prone areas; more than 50% of the total exposed population lives in Japan. Smaller nations like Macao and the Maldives are among the most exposed by population count. Exposed nuclear power plants are limited to Japan, China, India, Taiwan, and USA. In contrast, a local tsunami vulnerability and risk analysis applies information on population, building types, infrastructure, inundation, and flow depth for a certain tsunami scenario with a corresponding return period, combined with empirical data on tsunami damages and mortality. Results and validation of a GIS tsunami vulnerability and risk assessment model are presented. The GIS model is adapted for optimal use of data available for each study. Finally, the importance of including landslide sources in the tsunami analysis is also discussed.

  4. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, ensuring that the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  5. Assessing flood risk at the global scale: model setup, results, and sensitivity

    NASA Astrophysics Data System (ADS)

    Ward, Philip J.; Jongman, Brenden; Sperna Weiland, Frederiek; Bouwman, Arno; van Beek, Rens; Bierkens, Marc F. P.; Ligtvoet, Willem; Winsemius, Hessel C.

    2013-12-01

    Globally, economic losses from flooding exceeded US$19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example, annual expected exposed population (169 million) and annual expected exposed GDP (US$1,383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low-frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures.

  6. Global climate sensitivity derived from ~784,000 years of SST data

    NASA Astrophysics Data System (ADS)

    Friedrich, T.; Timmermann, A.; Tigchelaar, M.; Elison Timm, O.; Ganopolski, A.

    2015-12-01

Global mean temperatures will increase in response to future increases in greenhouse gas concentrations. The magnitude of this warming for a given radiative forcing is still a subject of debate. Here we provide estimates of the equilibrium climate sensitivity using paleo-proxy and modeling data from the last eight glacial cycles (~784,000 years). First, two reconstructions of globally averaged surface air temperature (SAT) for the last eight glacial cycles are obtained from two independent sources: one based mainly on a transient model simulation, the other derived from paleo-SST records and SST network/global SAT scaling factors. The two reconstructions agree very well in both the amplitude and the timing of past SAT variations. Second, we calculate the radiative forcings associated with greenhouse gas concentrations, dust concentrations, and surface albedo changes for the last 784,000 years. The equilibrium climate sensitivity is then derived from the ratio of the SAT anomalies to the radiative forcing changes. Our results reveal that this estimate of the Charney climate sensitivity is a function of the background climate, with substantially higher values for warmer climates. Warm phases exhibit an equilibrium climate sensitivity of ~3.70 K per CO2 doubling - more than twice the value derived for cold phases (~1.40 K per 2xCO2). We will show that the current CMIP5 ensemble-mean projection of global warming during the 21st century is supported by our estimate of climate sensitivity derived from paleoclimate data of the past 784,000 years.
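The ratio-based estimation described above can be sketched numerically as a regression of SAT anomalies on forcing anomalies, scaled to a CO2 doubling; the forcing constant and synthetic series below are illustrative, not the paper's data:

```python
import numpy as np

# Equilibrium climate sensitivity (ECS) from paired surface-air-temperature
# and radiative-forcing anomalies: regress dT on dF and scale the slope by
# the forcing of a CO2 doubling (~3.7 W m^-2). Synthetic data only.
F_2XCO2 = 3.7  # W m^-2, canonical forcing of a CO2 doubling

def ecs_from_anomalies(dT, dF, f_2xco2=F_2XCO2):
    """ECS in K per CO2 doubling from anomaly series dT (K) and dF (W m^-2)."""
    slope = np.polyfit(dF, dT, 1)[0]   # sensitivity parameter, K per W m^-2
    return slope * f_2xco2

rng = np.random.default_rng(0)
dF = np.linspace(-8.0, 0.0, 50)                  # glacial-cycle forcing anomalies
dT = 0.5 * dF + rng.normal(scale=0.02, size=50)  # synthetic SAT response
print(round(ecs_from_anomalies(dT, dF), 2))      # ~1.85 K per 2xCO2 here
```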

  7. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Because the phase function is difficult to measure, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements at three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819

  8. Tilt-Sensitivity Analysis for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Papalexandris, Miltiadis; Waluschka, Eugene

    2003-01-01

    A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.

  9. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

During the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario for how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, I took historical data on the operational times and failure times of these ORUs and used them to build predictive models of failure based on probability distribution functions, mainly the Weibull distribution. I then ran Monte Carlo simulations to see how an entire population of these components would perform. Finally, I varied the wear-out characteristic from its intrinsic value up to extremely high wear-out values and determined how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
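The workflow described, a Weibull failure model plus Monte Carlo simulation of a component population under a sweep of the wear-out (shape) parameter, might look roughly like this; all parameter values are invented, not ISS data:

```python
import numpy as np

# Monte Carlo sweep of the Weibull shape ("wear-out") parameter for a
# population of replaceable units. Scale, mission length, and population
# size are invented placeholders, not ISS ORU data.
rng = np.random.default_rng(42)

def mean_failures(shape, scale_hr, mission_hr, n_units, n_sims=5000):
    """Mean number of units in the population failing within the mission."""
    t_fail = scale_hr * rng.weibull(shape, size=(n_sims, n_units))
    return (t_fail < mission_hr).sum(axis=1).mean()

for beta in (1.0, 2.0, 4.0):    # beta > 1 indicates wear-out behaviour
    print(beta, mean_failures(beta, scale_hr=50_000, mission_hr=20_000, n_units=30))
```

With a mission shorter than the characteristic life, raising the shape parameter concentrates failures near the characteristic life and so lowers early-mission failure counts; the reverse holds as the mission approaches the characteristic life, which is why sweeping this parameter matters for spares sizing.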

  10. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
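The link between non-normality and eigenvalue sensitivity is easy to demonstrate on a toy matrix (not a discretized stability operator): a perturbation of norm 1e-6 shifts the eigenvalues by roughly a thousand times that amount.

```python
import numpy as np

# Eigenvalue sensitivity of a non-normal matrix: A is upper triangular with a
# large off-diagonal entry, so its eigenvalues respond disproportionately to
# a tiny perturbation E. Toy example, not a hydrodynamic stability operator.
A = np.array([[1.0, 100.0],
              [0.0,   1.1]])
E = np.array([[0.0,  0.0],
              [1e-6, 0.0]])       # perturbation of norm 1e-6

lam0 = np.sort(np.linalg.eigvals(A))
lam1 = np.sort(np.linalg.eigvals(A + E))
shift = np.abs(lam1 - lam0).max()
print(shift)                      # ~1e-3: about 1000x the perturbation size
```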

  11. Regional Fast Cloud Feedback Assessment As Constraint on Global Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Quaas, J.; Kuehne, P.; Block, K.; Salzmann, M.

    2014-12-01

Uncertainty in climate sensitivity estimates from models relates to inter-model spread in the cloud-climate feedback. A sizeable component of the cloud-climate feedback is due to fast adjustments to altered CO2 profiles. This suggests that emerging large-domain, season-long cloud-resolving simulations might become useful as reference simulations when performing sensitivity simulations with doubled CO2 concentrations. We assessed the fast cloud feedback in the CMIP5 multi-model ensemble of general circulation models (GCMs) and found that, in the chosen example region of Central Europe, the fast cloud feedback in individual models behaves similarly to how it does over global land areas, yet shows a large inter-model scatter. This result is discussed with respect to the question of whether a regional high-resolution model might be suitable for constraining global cloud feedbacks.

  12. Defining a fire year for reporting and analysis of global interannual fire variability

    NASA Astrophysics Data System (ADS)

    Boschetti, Luigi; Roy, David P.

    2008-09-01

    The interannual variability of fire activity has been studied without an explicit investigation of a suitable starting month for yearly calculations. Sensitivity analysis of 37 months of global MODIS active fire detections indicates that a 1-month change in the start of the fire year definition can lead, in the worst case, to a difference of over 6% and over 45% in global and subcontinental scale annual fire totals, respectively. Optimal starting months for analyses of global and subcontinental fire interannual variability are described. The research indicates that a fire year starting in March provides an optimal definition for annual global fire activity.
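The sensitivity test described, recomputing annual totals for each candidate starting month, can be sketched on a synthetic monthly series standing in for the MODIS active fire counts:

```python
import numpy as np

# Recompute two consecutive 12-month "fire year" totals for every candidate
# starting month and compare them. The Poisson series below is synthetic,
# standing in for monthly MODIS active-fire detection counts.
rng = np.random.default_rng(1)
monthly = rng.poisson(1000, size=36)        # three synthetic years of counts

def fire_year_totals(series, start):
    """(year1, year2) sums when the fire year begins at month index `start`."""
    window = series[start:start + 24]
    return window[:12].sum(), window[12:].sum()

for start in range(12):
    y1, y2 = fire_year_totals(monthly, start)
    print(start, y1, y2, abs(y1 - y2))      # spread shows start-month sensitivity
```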

  13. Global Analysis of Posttranslational Protein Arginylation

    PubMed Central

    Rai, Reena; Bailey, Aaron O; Yates, John R; Wolf, Yuri I; Zebroski, Henry; Kashina, Anna

    2007-01-01

Posttranslational arginylation is critical for embryogenesis, cardiovascular development, and angiogenesis, but its molecular effects and the identity of proteins arginylated in vivo are largely unknown. Here we report a global analysis of this modification at the protein level and the identification of 43 proteins arginylated in vivo on highly specific sites. Our data demonstrate that, contrary to previous belief, arginylation can occur on any N-terminally exposed residue, likely defined by a structural recognition motif on the protein surface, and that it preferentially affects a number of physiological systems, including the cytoskeleton and primary metabolic pathways. The results of our study suggest that protein arginylation is a general mechanism for regulating protein structure and function, and they outline the potential role of protein arginylation in cell metabolism and embryonic development. PMID:17896865

  14. Adjoint sensitivity analysis of hydrodynamic stability in cyclonic flows

    NASA Astrophysics Data System (ADS)

    Guzman Inigo, Juan; Juniper, Matthew

    2015-11-01

    Cyclonic separators are used in a variety of industries to efficiently separate mixtures of fluid and solid phases by means of centrifugal forces and gravity. In certain circumstances, the vortex core of cyclonic flows is known to precess due to the instability of the flow, which leads to performance reductions. We aim to characterize the unsteadiness using linear stability analysis of the Reynolds Averaged Navier-Stokes (RANS) equations in a global framework. The system of equations, including the turbulence model, is linearised to obtain an eigenvalue problem. Unstable modes corresponding to the dynamics of the large structures of the turbulent flow are extracted. The analysis shows that the most unstable mode is a helical motion which develops around the axis of the flow. This result is in good agreement with LES and experimental analysis, suggesting the validity of the approach. Finally, an adjoint-based sensitivity analysis is performed to determine the regions of the flow that, when altered, have most influence on the frequency and growth-rate of the unstable eigenvalues.

  15. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven E.; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2008-01-01

As a follow-up to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties in a global 3-D chemical transport model (GEOS-Chem CTM). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system, which have significantly different cloud optical depths (CODs) and vertical distributions. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions, while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. Model simulations with each of the three cloud distributions all show that the change in the global burden of O3 due to clouds is less than 5%. Model perturbation experiments with GEOS-3, in which the magnitude of the 3-D CODs is progressively varied from -100% to 100%, predict only modest changes (<5%) in global mean OH concentrations. J(O1D), J(NO2), and OH concentrations show the strongest sensitivity at small CODs and become insensitive at large CODs due to saturation effects. Caution should be exercised not to use a cloud single-scattering albedo lower than about 0.999 in photochemical models, in order to remain consistent with current knowledge of cloud absorption at UV wavelengths. Our results have important implications for model intercomparisons and for climate feedbacks on tropospheric photochemistry.

  16. Sensitivity Studies for Space-Based Global Measurements of Atmospheric Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Mao, Jian-Ping; Kawa, S. Randolph; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    Carbon dioxide (CO2) is well known as the primary forcing agent of global warming. Although the climate forcing due to CO2 is well known, the sources and sinks of CO2 are not well understood. Currently the lack of global atmospheric CO2 observations limits our ability to diagnose the global carbon budget (e.g., finding the so-called "missing sink") and thus limits our ability to understand past climate change and predict future climate response. Space-based techniques are being developed to make high-resolution and high-precision global column CO2 measurements. One of the proposed techniques utilizes the passive remote sensing of Earth's reflected solar radiation at the weaker vibration-rotation band of CO2 in the near infrared (approx. 1.57 micron). We use a line-by-line radiative transfer model to explore the potential of this method. Results of sensitivity studies for CO2 concentration variation and geophysical conditions (i.e., atmospheric temperature, surface reflectivity, solar zenith angle, aerosol, and cirrus cloud) will be presented. We will also present sensitivity results for an O2 A-band (approx. 0.76 micron) sensor that will be needed along with CO2 to make surface pressure and cloud height measurements.

  17. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
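The variance-based Sobol' benchmark that DELSA is compared against can be sketched with a pick-freeze estimator; the Ishigami test function below stands in for a hydrologic model:

```python
import numpy as np

# First-order Sobol' indices via the Saltelli pick-freeze estimator, using
# the standard Ishigami function as a stand-in model. Analytic values are
# S1 ~ 0.31, S2 ~ 0.44, S3 ~ 0.00.
rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = fA.var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                # freeze x_i from B
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / var)  # Saltelli-type estimator
print([round(s, 2) for s in S])
```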

  18. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to the factors on which it depends (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.

  19. Sensitivity Analysis of a process based erosion model using FAST

    NASA Astrophysics Data System (ADS)

    Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin

    2015-04-01

deposition are related to overland flow velocity using the equation of Engelund and Hansen and the sinking velocity of the grain sizes, respectively. The sensitivity analysis was performed on virtual hillslopes similar to those in the Weiherbach catchment. We applied the FAST method (Fourier Amplitude Sensitivity Test), which provides a global sensitivity analysis with comparatively few model runs. We varied model parameters within predefined ranges that are physically meaningful for the Weiherbach catchment. These parameters included rainfall intensity, surface roughness, hillslope geometry, land use, erosion resistance, and soil hydraulic parameters. The results of this study can guide further modelling efforts in the Weiherbach catchment with respect to data collection and model modification.

  20. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges, and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches, parametrizations, and calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs in avalanche risk modelling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges and probabilities as well as fuzzy expressions and fractal metrics. As a specific approach, work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purposes of this study, which focuses on DEM characteristics, factors like land cover, meteorological recordings, and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  1. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, and promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of the complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications, and we hypothesized that an optimal bone cement volume could be assessed from a patient's CT data. Gray-level run length analysis was used to extract textural information about the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean width of the trabeculae, area of the intertrabecular space, and area of the trabeculae. The area of the intertrabecular space was selected as the parameter for estimating an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2

  2. [Ecological sensitivity of Shanghai City based on GIS spatial analysis].

    PubMed

    Cao, Jian-jun; Liu, Yong-juan

    2010-07-01

In this paper, five sensitivity factors affecting the eco-environment of Shanghai City, i.e., rivers and lakes, historical relics and forest parks, geological disasters, soil pollution, and land use, were selected, and their weights were determined by the analytic hierarchy process. Combined with GIS spatial analysis techniques, the sensitivities of these factors were classified into four grades, i.e., highly sensitive, moderately sensitive, low sensitive, and insensitive, and the spatial distribution of the ecological sensitivity of Shanghai City was mapped. There was a significant spatial differentiation in the ecological sensitivity of the City: the insensitive, low sensitive, moderately sensitive, and highly sensitive areas occupied 37.07%, 5.94%, 38.16%, and 18.83% of the area, respectively. Some suggestions on the City's zoning protection and construction were proposed. This study could provide scientific references for the City's environmental protection and economic development. PMID:20879541
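The overlay step, AHP-derived weights applied to normalized factor layers and the composite score cut into four grades, can be sketched generically; the weights, rasters, and class breaks below are invented placeholders, not the study's values:

```python
import numpy as np

# Weighted linear overlay of normalized factor rasters, classified into four
# sensitivity grades. All weights, rasters, and class breaks are invented
# placeholders, not the Shanghai study's values.
rng = np.random.default_rng(3)
layers = rng.random((5, 200, 200))                  # five normalized factor rasters
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # AHP-derived, sum to 1

score = np.tensordot(weights, layers, axes=1)       # per-cell weighted sum
grade = np.digitize(score, bins=[0.40, 0.50, 0.60]) # 0=insensitive .. 3=highly
area_share = np.bincount(grade.ravel(), minlength=4) / grade.size
print(area_share.round(3))                          # fraction of area per grade
```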

  3. Global analysis of the immune response

    NASA Astrophysics Data System (ADS)

    Ribeiro, Leonardo C.; Dickman, Ronald; Bernardes, Américo T.

    2008-10-01

The immune system may be seen as a complex system and characterized using tools developed in the study of such systems, for example, surface roughness and its associated Hurst exponent. We analyze densitometric (Panama blot) profiles of immune reactivity to classify individuals into groups with similar roughness statistics. We focus on a population of individuals living in a region in which malaria is endemic, as well as a control group from a disease-free region. Our analysis groups individuals according to the presence or absence of malaria symptoms and the number of malaria manifestations. Applied to the Panama blot data, our method proves more effective at discriminating between groups than principal-components analysis or super-paramagnetic clustering. Our findings provide evidence that some phenomena observed in the immune system can only be understood from a global point of view. We observe similar tendencies between the experimental immune profiles and artificial profiles obtained from an immune network model. The statistical entropy of the experimental profiles is found to exhibit variations similar to those observed in the Hurst exponent.
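A rescaled-range (R/S) estimate of the Hurst exponent, of the kind used to characterize profile roughness, can be sketched as follows; white noise stands in here for a densitometric profile:

```python
import numpy as np

# Rescaled-range (R/S) estimate of the Hurst exponent: the log-log slope of
# mean R/S against window size. The white-noise input is a stand-in for a
# densitometric immune-reactivity profile, not data from the study above.
def hurst_rs(x, windows=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    rs = []
    for w in windows:
        chunks = x[:len(x) // w * w].reshape(-1, w)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        span = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviations
        rs.append(np.mean(span / chunks.std(axis=1)))
    return np.polyfit(np.log(windows), np.log(rs), 1)[0]

noise = np.random.default_rng(7).normal(size=4096)
print(round(hurst_rs(noise), 2))   # near 0.5-0.6 for uncorrelated noise
```

The simple R/S estimator is biased upward at small window sizes, so uncorrelated noise typically yields an estimate somewhat above 0.5.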

  4. Sensitivity of water scarcity events to ENSO-driven climate variability at the global scale

    NASA Astrophysics Data System (ADS)

    Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2015-10-01

    Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4 % of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6 % (CTA: consumption-to-availability ratio) and 41.1 % (WCI: water crowding index) of the global population, whilst only 11.4 % (CTA) and 15.9 % (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and

  5. Sensitivity of Water Scarcity Events to ENSO-Driven Climate Variability at the Global Scale

    NASA Technical Reports Server (NTRS)

    Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2015-01-01

Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (CTA: consumption-to-availability ratio) and 41.1% (WCI: water crowding index) of the global population, whilst only 11.4% (CTA) and 15.9% (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and scarcity

  6. Global-local finite element analysis of composite structures

    SciTech Connect

    Deibler, J.E.

    1992-06-01

The development of layered finite elements has facilitated the analysis of laminated composite structures. However, the analysis of a structure containing both isotropic and composite materials remains a difficult problem. A methodology has been developed to conduct a "global-local" finite element analysis. A "global" analysis of the entire structure is conducted at the appropriate loads with the composite portions replaced with an orthotropic material of equivalent material properties. A "local" layered composite analysis is then conducted on the region of interest. The displacement results from the "global" analysis are used as loads for the "local" analysis. The laminate stresses and strains can then be examined and failure criteria evaluated.

  8. Limits to global and Australian temperature change this century based on expert judgment of climate sensitivity

    NASA Astrophysics Data System (ADS)

    Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.

    2016-07-01

    The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in the models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcomes associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that exceeding a warming of 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions under most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or until this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk-management approach can complement the focus on the likely range of projections. They can also highlight the scale of the potential reduction in the range of projections, should tight constraints on climate sensitivity be established by future research.
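    The abstract does not state the exact scaling method, but a common pattern-scaling approximation rescales a model's projected warming by the ratio of a target equilibrium climate sensitivity (ECS) to the model's own ECS, assuming warming varies linearly with sensitivity. The numbers below (a model ECS of 3.2 K, a 4.0 K projection, and expert-judgment bounds of 1.5 K and 6.0 K) are hypothetical illustrations, not values from the paper:

```python
def scale_warming(dT_model: float, ecs_model: float, ecs_target: float) -> float:
    """Rescale a model's projected warming to a target equilibrium climate
    sensitivity, assuming warming scales linearly with ECS.  This is a
    common pattern-scaling approximation, not the paper's exact method."""
    return dT_model * (ecs_target / ecs_model)

# Hypothetical: a model with ECS = 3.2 K projects 4.0 K of warming by 2100.
# Rescaled to illustrative expert-judgment bounds of 1.5 K and 6.0 K ECS:
low = scale_warming(4.0, 3.2, 1.5)    # 1.875 K
high = scale_warming(4.0, 3.2, 6.0)   # 7.5 K
```

The spread between `low` and `high` mirrors the paper's point: the expert-judgment range of sensitivity implies a wider envelope of outcomes than the model ensemble alone.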

  9. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-01-01

    Clouds affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo) in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT), while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model-calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3, and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decreases in OH concentrations of approximately 15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitudes of 3-D CODs are progressively varied from -100% to 100%, predict only modest
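    The "random vertical overlap" assumption mentioned in both versions of this abstract treats cloudy layers as statistically independent, so clear-sky fractions multiply down the column. A minimal sketch (the layer fractions are hypothetical, and real radiative-transfer codes use this only as one of several overlap options):

```python
from functools import reduce

def total_cloud_fraction(layer_fractions):
    """Total cloud cover of a column under the random-overlap assumption:
    layers are independent, so the clear-sky fractions multiply."""
    clear = reduce(lambda c, f: c * (1.0 - f), layer_fractions, 1.0)
    return 1.0 - clear

# Hypothetical column with three cloudy layers:
print(total_cloud_fraction([0.2, 0.5, 0.1]))  # 0.64
```

Random overlap yields more total cover than maximum overlap (which would give 0.5 here), which is one reason the choice of overlap scheme matters for photolysis-frequency calculations.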

  10. Sensitivity of photolysis frequencies and key tropospheric oxidants in a global model to cloud vertical distributions and optical properties

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-05-01

    Clouds directly affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of these direct radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global three-dimensional (3-D) chemical transport model. The model was driven with a series of meteorological archives (GEOS-1 in support of the Stratospheric Tracers of Atmospheric Transport mission, or GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System (GEOS) data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3, and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of ~15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments