Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest or the output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total effect sensitivity indices. The results of the present…
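The first-order and total-effect Sobol' indices mentioned in this abstract can be estimated by a brute-force Monte Carlo sketch. The code below is a minimal pure-Python illustration (not the study's code); the additive test model and the use of the Jansen estimators are assumptions of this example:

```python
import random

def sobol_indices(f, n_dim, n_samples, seed=0):
    """Estimate first-order (S_i) and total-effect (S_Ti) Sobol' indices
    with the Jansen pick-and-freeze estimators."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_dim)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_dim)] for _ in range(n_samples)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    S, ST = [], []
    for i in range(n_dim):
        # AB_i: rows of A with column i replaced by the value from B
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen estimators for the first-order and total-effect indices
        Si = (var - 0.5 * sum((yb - yab) ** 2
                              for yb, yab in zip(yB, yABi)) / n_samples) / var
        STi = 0.5 * sum((ya - yab) ** 2
                        for ya, yab in zip(yA, yABi)) / n_samples / var
        S.append(Si)
        ST.append(STi)
    return S, ST

# Additive toy model with known variance shares 16/21, 4/21, 1/21
model = lambda x: 4 * x[0] + 2 * x[1] + x[2]
S, ST = sobol_indices(model, 3, 4096)
```

For an additive model like this one, the first-order and total-effect indices should roughly coincide; a gap between them signals interactions.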
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-01
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis. PMID:23530815
Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, there is little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
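Two ingredients of this abstract, the Elementary Effect Test (method of Morris) and a bootstrap check on the resulting screening measure, can be sketched briefly. This is a minimal illustration under assumed settings (radial step delta, a hypothetical toy model with one insensitive input), not the study's implementation:

```python
import random

def morris_mu_star(f, n_dim, n_traj=50, delta=0.25, seed=1):
    """Elementary Effects screening: mu* (mean absolute effect) per input."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_dim)]
    for _ in range(n_traj):
        x = [rng.uniform(0, 1 - delta) for _ in range(n_dim)]
        y_prev = f(x)
        for i in rng.sample(range(n_dim), n_dim):  # perturb inputs in random order
            x[i] += delta
            y_next = f(x)
            effects[i].append((y_next - y_prev) / delta)
            y_prev = y_next
    mu_star = [sum(map(abs, es)) / len(es) for es in effects]
    return mu_star, effects

def bootstrap_ci(effect_list, n_boot=500, seed=2):
    """Bootstrap a 95% interval for mu*, to judge screening convergence."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(effect_list) for _ in effect_list]
        stats.append(sum(map(abs, resample)) / len(resample))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Toy model: the third input is insensitive and should screen out
f = lambda x: 5 * x[0] + x[1] * x[1]
mu_star, effects = morris_mu_star(f, 3)
lo, hi = bootstrap_ci(effects[2])
```

If the bootstrap interval for an input's mu* lies entirely below the screening threshold, labelling that input insensitive is robust at the current sample size.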
Towards More Efficient and Effective Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2014-05-01
Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches to sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting, conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach to global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
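The harmony search (HS) step described above can be sketched compactly. The following is a generic pure-Python HS minimiser on a toy sphere function, not the DASim/SampEn tuning code; the parameter values (memory size, HMCR, PAR, bandwidth) are illustrative assumptions:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=3000, seed=0):
    """Minimise f over the box `bounds` with a basic harmony search."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
                v = min(max(v, lo), hi)
            else:                                    # random selection
                v = rng.uniform(lo, hi)
            new.append(v)
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                        # replace worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]

# Sphere function: global minimum 0 at the origin
x_best, f_best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 4)
```

In the paper's setting, f would be the (emulated) discrepancy between an activity's observed SampEn and its target value, restricted to the inputs that the global sensitivity analysis ranked as influential.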
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
A comparison of two sampling methods for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Tarantola, Stefano; Becker, William; Zeitz, Dirk
2012-05-01
We compare the convergence properties of two different quasi-random sampling designs - Sobol' quasi-Monte Carlo and Latin supercube sampling - in variance-based global sensitivity analysis. We use the non-monotonic V-function of Sobol' as the base case study, and compare the performance of both sampling strategies at increasing sample size and dimensionality against analytical values. The results indicate that in almost all cases investigated here, the Sobol' design performs better. This, coupled with the fact that effective Latin supercube sampling requires a priori knowledge of the interaction properties of the function, leads us to recommend Sobol' sampling in most practical cases.
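The kind of sampling comparison discussed here can be reproduced in miniature. Since a Sobol' sequence generator is not in the Python standard library (one is available in, e.g., scipy.stats.qmc), the sketch below compares Latin hypercube sampling against plain pseudo-random sampling on a Sobol' g-function, and is an illustrative assumption rather than the paper's experiment:

```python
import random
import statistics

def latin_hypercube(n, dim, rng):
    """n LHS points in [0,1)^dim: exactly one sample per stratum per dimension."""
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(pt) for pt in zip(*cols)]

def estimate(points, f):
    """Sample-mean estimator of the integral of f over the unit cube."""
    return sum(f(p) for p in points) / len(points)

def g(x, a=(0, 1, 4.5)):
    """Sobol' g-function; smaller a_i makes input i more important."""
    prod = 1.0
    for xi, ai in zip(x, a):
        prod *= (abs(4 * xi - 2) + ai) / (1 + ai)
    return prod

rng = random.Random(0)
n, reps = 256, 50
# Spread of the integral estimate over repeated designs: lower is better
err_lhs = statistics.pstdev(
    estimate(latin_hypercube(n, 3, rng), g) for _ in range(reps))
err_mc = statistics.pstdev(
    estimate([[rng.random() for _ in range(3)] for _ in range(n)], g)
    for _ in range(reps))
```

Because the g-function with these coefficients is dominated by first-order effects, the stratified LHS design should show a visibly smaller estimator spread than pseudo-random sampling at the same cost.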
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a high resolution global scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for…
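The per-cell sensitivity index defined in this abstract (relative change of VWC over relative change of the parameter) is straightforward to write down. The sketch below uses a deliberately simplified, hypothetical VWC model (VWC = evapotranspiration / yield, with ET proportional to season length); the parameter names and values are illustrative, not the paper's:

```python
def oat_sensitivity(model, params, rel_step=0.05):
    """One-at-a-time relative sensitivity: (dVWC/VWC) / (dp/p)."""
    base = model(params)
    indices = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1 + rel_step)})
        indices[name] = ((model(perturbed) - base) / base) / rel_step
    return indices

def vwc(p):
    """Toy virtual water content: seasonal evapotranspiration over yield."""
    et = p["et_rate"] * p["season_length"]
    return et / p["yield"]

s = oat_sensitivity(vwc, {"et_rate": 4.0, "season_length": 120.0, "yield": 3.0})
```

Consistent with the abstract, a longer growing period gives a positive (direct) index, while a parameter in the denominator such as yield gives a negative (inverse) index.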
Global sensitivity analysis for DSMC simulations of hypersonic shocks
NASA Astrophysics Data System (ADS)
Strand, James S.; Goldstein, David B.
2013-08-01
Two global, Monte Carlo-based sensitivity analyses were performed to determine which reaction rates most affect the results of Direct Simulation Monte Carlo (DSMC) simulations for a hypersonic shock in five-species air. The DSMC code was written and optimized with shock tube simulations in mind, and includes modifications to allow for the efficient simulation of a 1D hypersonic shock. The TCE model is used to convert Arrhenius-form reaction rate constants into reaction cross-sections, after modification to allow accurate modeling of reactions with arbitrarily large rates relative to the VHS collision rate. The square of the Pearson correlation coefficient was used as the measure for sensitivity in the first of the analyses, and the mutual information was used as the measure in the second. The quantity of interest (QoI) for these analyses was the NO density profile across a 1D shock at ~8000 m/s (M∞ ≈ 23). This vector QoI was broken into a set of scalar QoIs, each representing the density of NO at a specific point downstream of the shock, and sensitivities were calculated for each scalar QoI based on both measures of sensitivity. Profiles of sensitivity vs. location downstream of the shock were then integrated to determine an overall sensitivity for each reaction. A weighting function was used in the integration in order to emphasize sensitivities in the region of greatest thermal and chemical non-equilibrium. Both sensitivity analysis methods agree on the six reactions which most strongly affect the density of NO. These six reactions are the N2 dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions N2 + O ⇄ NO + N and NO + O ⇄ O2 + N. This analysis lays the groundwork for the application of Bayesian statistical methods for the calibration of parameters relevant to modeling a hypersonic shock layer with the DSMC method.
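The first of the two sensitivity measures above, the squared Pearson correlation between a sampled rate constant and a scalar QoI, can be sketched in a few lines. The toy QoI below is a hypothetical linear stand-in for the NO density at one downstream point, not the DSMC model:

```python
import math
import random

def pearson_r2(xs, ys):
    """Squared Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return (cov / (sx * sy)) ** 2

# Hypothetical QoI: strongly driven by rate k1, weakly by k2, not at all by k3
rng = random.Random(0)
samples = [[rng.uniform(0.5, 1.5) for _ in range(3)] for _ in range(2000)]
qoi = [10 * k[0] + k[1] + rng.gauss(0, 0.5) for k in samples]
r2 = [pearson_r2([k[i] for k in samples], qoi) for i in range(3)]
```

In the paper's setting this scalar computation is repeated at each location downstream of the shock and the resulting profile is integrated with a weighting function; r² captures only the linear part of the dependence, which is one motivation for the second, mutual-information-based analysis.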
Global sensitivity analysis of the XUV-ABLATOR code
NASA Astrophysics Data System (ADS)
Nevrlý, Václav; Janku, Jaroslav; Dlabka, Jakub; Vašinek, Michal; Juha, Libor; Vyšín, Luděk.; Burian, Tomáš; Lančok, Ján.; Skřínský, Jan; Zelinger, Zdeněk.; Pira, Petr; Wild, Jan
2013-05-01
The availability of a numerical model providing reliable estimation of the parameters of ablation processes induced by extreme ultraviolet laser pulses on nanosecond and sub-picosecond timescales is highly desirable for recent experimental research as well as for practical purposes. The performance of the one-dimensional thermodynamic code (XUV-ABLATOR) in predicting the relationship of ablation rate and laser fluence is investigated for three reference materials: (i) silicon, (ii) fused silica and (iii) polymethyl methacrylate. The effect of pulse duration and different material properties on the model predictions is studied in this contribution for conditions typical of two compact laser systems operating at 46.9 nm. A software implementation of the XUV-ABLATOR code, including a graphical user interface and a set of tools for sensitivity analysis, was developed. Global sensitivity analysis using high dimensional model representation in combination with quasi-random sampling was applied in order to identify the most critical input data as well as to explore the uncertainty range of the model results.
Variability-based global sensitivity analysis of circuit response
NASA Astrophysics Data System (ADS)
Opalski, Leszek J.
2014-11-01
The research problem of interest to this paper is: how to determine, efficiently and objectively, the most and the least influential parameters of a multimodule electronic system, given the system model f and the module parameter variation ranges. The author investigates whether existing generic global sensitivity methods are applicable to electronic circuit design, even though they were developed (and successfully applied) in quite distant engineering areas. A photodiode detector analog front-end system response time is used to reveal the capability of the selected global sensitivity approaches under study.
Global sensitivity analysis of the radiative transfer model
NASA Astrophysics Data System (ADS)
Neelam, Maheshwari; Mohanty, Binayak P.
2015-04-01
With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The Radiative Transfer Model (RTM) behaves more non-linearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water- and energy-rich environments.
Global sensitivity analysis of the Indian monsoon during the Pleistocene
NASA Astrophysics Data System (ADS)
Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.
2015-01-01
The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature, in phase with June to July insolation (high glaciation favouring a late-phase response); sea-surface temperature, in phase with May insolation; continental precipitation, in phase with July insolation; and mixed-layer depth, in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation.
Simulation of the global contrail radiative forcing: A sensitivity analysis
NASA Astrophysics Data System (ADS)
Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.
2012-12-01
The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The most up-to-date Community Atmospheric Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.
Global sensitivity analysis in control-augmented structural synthesis
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1989-01-01
In this paper, an integrated approach to structural/control design is proposed in which variables in both the passive (structural) and active (control) disciplines of an optimization process are changed simultaneously. The global sensitivity equation (GSE) method of Sobieszczanski-Sobieski (1988) is used to obtain the behavior sensitivity derivatives necessary for the linear approximations used in the parallel multidisciplinary synthesis problem. The GSE allows for the decoupling of large systems into smaller subsystems and thus makes it possible to determine the local sensitivities of each subsystem's outputs to its inputs and parameters. The advantages in using the GSE method are demonstrated using a finite-element representation of a truss structure equipped with active lateral displacement controllers, which is undergoing forced vibration.
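For a system decomposed into two coupled subsystems, the global sensitivity equation described above reduces to a small linear solve for the total derivatives of the coupled outputs. The sketch below is a generic two-subsystem instance with hypothetical partial-derivative values, not the truss/control example from the paper:

```python
def gse_total_derivatives(df1_dx, df1_dy2, df2_dx, df2_dy1):
    """Solve the 2-subsystem Global Sensitivity Equation

        [ 1         -df1/dy2 ] [dy1/dx]   [df1/dx]
        [-df2/dy1    1       ] [dy2/dx] = [df2/dx]

    for the total derivatives of the coupled outputs y1, y2 w.r.t. x."""
    det = 1 - df1_dy2 * df2_dy1          # assumes the coupling is well-posed
    dy1 = (df1_dx + df1_dy2 * df2_dx) / det
    dy2 = (df2_dx + df2_dy1 * df1_dx) / det
    return dy1, dy2

# Hypothetical coupled pair: y1 = x + 0.5*y2 (structure), y2 = 2*x + 0.25*y1 (control)
dy1, dy2 = gse_total_derivatives(df1_dx=1.0, df1_dy2=0.5, df2_dx=2.0, df2_dy1=0.25)
```

Solving the closed-form coupled system directly gives y1 = (16/7)x and y2 = (18/7)x, so the GSE recovers the total derivatives using only the local partials of each subsystem, which is exactly what enables decoupled, discipline-by-discipline sensitivity computation.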
A global sensitivity analysis for African sleeping sickness
DAVIS, STEPHEN; AKSOY, SERAP; GALVANI, ALISON
2012-01-01
African sleeping sickness is a parasitic disease transmitted through the bites of tsetse flies of the genus Glossina. We constructed mechanistic models for the basic reproduction number, R0, of Trypanosoma brucei gambiense and Trypanosoma brucei rhodesiense, respectively the causative agents of West and East African human sleeping sickness. We present global sensitivity analyses of these models that rank the importance of the biological parameters that may explain variation in R0, using parameter ranges based on literature, field data and expertise from Uganda. For West African sleeping sickness, our results indicate that the proportion of bloodmeals taken from humans by Glossina fuscipes fuscipes is the most important factor, suggesting that differences in the exposure of humans to tsetse are fundamental to the distribution of T. b. gambiense. The second ranked parameter for T. b. gambiense and the highest ranked for T. b. rhodesiense was the proportion of Glossina refractory to infection. This finding underlines the possible implications of recent work showing that nutritionally stressed tsetse are more susceptible to trypanosome infection, and provides broad support for control strategies in development that are aimed at increasing refractoriness in tsetse flies. We note though that for T. b. rhodesiense the population parameters for tsetse – species composition, survival and abundance – were ranked almost as highly as the proportion refractory, and that the model assumed regular treatment of livestock with trypanocides as an established practice in the areas of Uganda experiencing East African sleeping sickness. PMID:21078220
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
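The variogram concept underlying VARS can be illustrated with a directional variogram of the response surface: gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], evaluated along each input axis. This is only a conceptual sketch of that building block (with a hypothetical toy model including a dummy input), not the VARS framework itself:

```python
import random

def directional_variogram(f, dim, axis, h, n_pairs=2000, seed=0):
    """gamma_i(h): mean squared response change for a step h along `axis`."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_pairs):
        x = [rng.uniform(0, 1 - h) for _ in range(dim)]
        xh = list(x)
        xh[axis] += h                      # step along one input direction
        acc += (f(xh) - f(x)) ** 2
    return 0.5 * acc / n_pairs

# Toy response surface: strong in x1, weak and nonlinear in x2, flat in x3
f = lambda x: 5 * x[0] + x[1] ** 2
gammas = [directional_variogram(f, 3, i, h=0.1) for i in range(3)]
```

Evaluating gamma_i over a range of step sizes h is what gives VARS its multi-scale view: small h recovers derivative-like local sensitivity, while large h approaches a variance-like global measure.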
Global sensitivity analysis of analytical vibroacoustic transmission models
NASA Astrophysics Data System (ADS)
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2016-04-01
Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) to analyse the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends are found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
A new variance-based global sensitivity analysis technique
NASA Astrophysics Data System (ADS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2013-11-01
A new set of variance-based sensitivity indices, called W-indices, is proposed. As with Sobol' indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol' indices reflect only the marginal information. The double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are then introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is applicable only for computing low-order indices. Model emulation is able to estimate all the W-indices at low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol' function and two engineering models are utilized to compare the W- and Sobol' indices and to verify the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
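Since the W-indices above are benchmarked against Sobol' indices on the Ishigami function, a generic pick-and-freeze Monte Carlo estimator of Sobol' main- and total-effect indices (standard Saltelli/Jansen forms, not the W-index estimators proposed in the paper) can serve as a reference sketch:

```python
import math
import random

def ishigami(x, a=7.0, b=0.1):
    # Ishigami test function on [-pi, pi]^3
    return math.sin(x[0]) + a * math.sin(x[1]) ** 2 + b * x[2] ** 4 * math.sin(x[0])

def sobol_indices(model, dim, n=8192, seed=1):
    """First-order (Saltelli) and total-effect (Jansen) pick-and-freeze estimators."""
    rng = random.Random(seed)
    draw = lambda: [rng.uniform(-math.pi, math.pi) for _ in range(dim)]
    A = [draw() for _ in range(n)]
    B = [draw() for _ in range(n)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n - 1)
    S, ST = [], []
    for i in range(dim):
        # A with its i-th column taken from B
        fABi = [model(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / n / var)
        ST.append(sum((fA[j] - fABi[j]) ** 2 for j in range(n)) / (2 * n) / var)
    return S, ST

S, ST = sobol_indices(ishigami, 3)
```

For the Ishigami function with a = 7, b = 0.1 the analytical first-order values are approximately S = (0.31, 0.44, 0.00), with a nonzero total effect for the third input (about 0.24) arising purely from its interaction with the first.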
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variation when these uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants. PMID:25459861
NASA Astrophysics Data System (ADS)
Benson, James; Ziehn, Tilo; Dixon, Nick S.; Tomlin, Alison S.
In this work global sensitivity studies using Monte Carlo sampling and high dimensional model representations (HDMR) have been carried out on the k-ɛ closure computational fluid dynamic (CFD) model MISKAM, allowing detailed representation of the effects of changing input parameters on the model outputs. The scenario studied is that of a complex street canyon in the city of York, UK. The sensitivity of the turbulence and mean flow fields to the input parameters is detailed both at specific measurement points and in the associated canyon cross-section to aid comparison with field data. This analysis gives insight into how model parameters can influence the predicted outputs. It also shows the relative strength of each parameter in its influence. Four main input parameters are addressed. Three parameters are surface roughness lengths, determining the flow over a surface, and the fourth is the background wind direction. In order to determine the relative importance of each parameter, sensitivity indices are calculated for the canyon cross-section. The sensitivity of the flow structures in and above the canyon to each parameter is found to be very location dependent. In general, at a particular measurement point, it is the closest wall surface that is most influential on the model output. However, due to the complexity of the flow at different wind angles this is not always the case, for example when a re-circulating canyon flow pattern is present. The background wind direction is shown to be an important parameter as it determines the surface features encountered by the flow. The accuracy with which this is specified when modelling a full-scale situation is therefore an important consideration when assessing model uncertainty. Overall, the uncertainty due to roughness lengths is small in comparison to the mean outputs, indicating that the model is well defined even with large ranges of input parameter uncertainty.
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K. J.
2012-04-01
Global aerosol contributions to radiative forcing (and hence climate change) are persistently subject to large uncertainty in successive Intergovernmental Panel on Climate Change (IPCC) reports (Schimel et al., 1996; Penner et al., 2001; Forster et al., 2007). As such, more complex global aerosol models are being developed to simulate aerosol microphysics in the atmosphere. The uncertainty in global aerosol model estimates is currently estimated by measuring the diversity amongst different models (Textor et al., 2006, 2007; Meehl et al., 2007). The uncertainty at the process level due to the need to parameterise in such models is not yet understood, and it is difficult to know whether the added model complexity comes at a cost of high model uncertainty. In this work the model uncertainty and its sources due to the uncertain parameters is quantified using variance-based sensitivity analysis. Given the complexity of a global aerosol model, we use Gaussian process emulation with a sufficient experimental design to make such a sensitivity analysis possible. The global aerosol model used here is GLOMAP (Mann et al., 2010) and we quantify the sensitivity of numerous model outputs to 27 expertly elicited uncertain model parameters describing emissions and processes such as growth and removal of aerosol. Using the R package DiceKriging (Roustant et al., 2010) along with the package sensitivity (Pujol, 2008) it has been possible to produce monthly global maps of model sensitivity to the uncertain parameters over the year 2008. Global model outputs estimated by the emulator are shown to be consistent with previously published estimates (Spracklen et al. 2010, Mann et al. 2010), but now we have an associated measure of parameter uncertainty and its sources. It can be seen that globally some parameters have no effect on the model predictions and any further effort in their development may be unnecessary, although a structural error in the model might also be identified. The
Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi
2011-01-01
This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R(2) > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD among others, largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters, e.g., ash content explains 70% of the variance in sludge production. Further the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, the global sensitivity analysis proved a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways for reducing uncertainties in the plant performance. This information can help engineers design robust WWTP plants. PMID:20828785
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis
NASA Astrophysics Data System (ADS)
Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.
2014-12-01
Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN) and a variance-based method (Sobol'). The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
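The bootstrapping step advocated here can be sketched with a deliberately simple binning-based first-order index (an illustration of the principle, not the PAWN or Sobol' estimators used in the study; the toy data are an assumption):

```python
import math
import random

def first_order_index(xs, ys, bins=20):
    """Crude first-order sensitivity of y to x on [0, 1]: variance of the
    bin-conditional means of y divided by the total variance of y."""
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / n
    sums = [0.0] * bins
    counts = [0] * bins
    for x, y in zip(xs, ys):
        b = min(int(x * bins), bins - 1)
        sums[b] += y
        counts[b] += 1
    between = sum(c * (s / c - mean) ** 2 for s, c in zip(sums, counts) if c)
    return between / n / var

def bootstrap_ci(xs, ys, reps=200, seed=3):
    """95% percentile bootstrap interval for the index above; a wide interval
    signals that the sample is too small for a converged estimate."""
    rng = random.Random(seed)
    n = len(ys)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(first_order_index([xs[i] for i in idx], [ys[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

rng = random.Random(42)
xs = [rng.random() for _ in range(1000)]
zs = [rng.random() for _ in range(1000)]  # second, weak input
ys = [math.sin(2 * math.pi * x) + 0.2 * z for x, z in zip(xs, zs)]
lo, hi = bootstrap_ci(xs, ys)
```

The width of the bootstrap interval itself can serve as the convergence criterion: if it shrinks below a tolerance as the sample grows, the sensitivity estimate can be considered robust.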
NASA Astrophysics Data System (ADS)
Dai, Heng; Ye, Ming
2015-09-01
Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved in the analysis. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios. This is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost, in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.
NASA Astrophysics Data System (ADS)
Werisch, Stefan; Krause, Julia
2014-05-01
Complex environmental models which are able to consider the dynamic interactions between plants, soils and the environment are suitable tools to predict the impact of climate variability and climate change on the water budget of small catchments. Unfortunately, the number of potential calibration parameters increases with the complexity of these models. Methods of global sensitivity analysis (GSA) are considered helpful tools to identify the sensitive, and therefore relevant, model parameters which need to be considered in the optimization process. To assess the efficiency of these approaches, three different methods for GSA of model parameters, namely (1) Mutual Entropy (ME), (2) Regional Sensitivity Analysis (RSA) and (3) the enhanced Fourier Amplitude Sensitivity Test (eFAST), have been tested and compared using the complex environmental model SWAP. The model was set up to simulate the water budget and soil water dynamics of a small experimental catchment in the Ore Mountains, Germany. Discharge and soil water content time series established the data basis for the sensitivity analysis. All three methods were applied to investigate the sensitivity of the model parameters with regard to the different data types, different model efficiency measures and different time resolutions for the calculation of the efficiency measures. The results indicate that GSA methods that yield only first-order sensitivities, i.e. the sole influence of a specific parameter on the model output (ME and RSA), are unsuitable for complex environmental models. They identified less than 20% of the model parameters as sensitive, while almost 80% of the model parameters were identified as sensitive on the basis of the total sensitivity index calculated by the eFAST method. Possible reasons for the failure of the first-order methods are the strong interactions of the parameters and the non-linear behavior of the model. A second important result of this study is that
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology
Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.
2008-01-01
Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system as, by default, they hold all other parameters fixed at baseline values. Using the techniques described within, we demonstrate how a multi-dimensional parameter space can be studied globally so all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, both in deterministic and stochastic settings, and propose novel techniques to handle problems encountered during this type of analysis. PMID:18572196
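A common sampling backbone for global uncertainty and sensitivity analysis of this kind is Latin hypercube sampling; a minimal, self-contained sketch (not tied to any one paper's implementation):

```python
import random

def latin_hypercube(n, dim, seed=6):
    """One LHS design on [0, 1]^dim: each input's range is split into n equal
    strata, each stratum is sampled once, and the columns are shuffled
    independently to randomize the pairing between inputs."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        col = [(k + rng.random()) / n for k in range(n)]  # one point per stratum
        rng.shuffle(col)
        cols.append(col)
    return [[cols[i][run] for i in range(dim)] for run in range(n)]

X = latin_hypercube(10, 3)
```

Each input's range is covered exactly once per stratum, which gives better space-filling than plain random sampling at the same number of model runs.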
NASA Astrophysics Data System (ADS)
Crucifix, M.; Araya-Melo, P. A.
2013-12-01
The sensitivity of terrestrial climates of the southern hemisphere to astronomical forcing, CO2 and glaciation level is systematically investigated following a global sensitivity analysis. The approach is founded on the analysis of about 100 experiments performed with the GCM HadCM3, statistically analysed using a Gaussian process emulator. The presentation emphasises the importance of the selection of experiments (experiment design) and the validation of the statistical model. At the stage of writing the abstract only preliminary results have been obtained, but, following an approach our group has previously applied to the Indian monsoon and vegetation feedback analysis, we expect to be able to show and discuss amplitude and phase relationships between changes in terrestrial environments of the southern hemisphere and driving factors.
Quantitative global sensitivity analysis of the RZWQM to warrant a robust and effective calibration
NASA Astrophysics Data System (ADS)
Esmaeili, Sara; Thomson, Neil R.; Tolson, Bryan A.; Zebarth, Bernie J.; Kuchta, Shawn H.; Neilsen, Denise
2014-04-01
Sensitivity analysis is a useful tool to identify key model parameters as well as to quantify simulation errors resulting from parameter uncertainty. The Root Zone Water Quality Model (RZWQM) has been subjected to various sensitivity analyses; however, in most of these efforts a local sensitivity analysis method was implemented, the nonlinear response was neglected, and the dependency among parameters was not examined. In this study we employed a comprehensive global sensitivity analysis to quantify the contribution of 70 model input parameters (including 35 hydrological parameters and 35 nitrogen cycle parameters) to the uncertainty of key RZWQM outputs relevant to raspberry row crops in Abbotsford, BC, Canada. Specifically, 9 model outputs that capture various vertical-spatial and temporal domains were investigated. A rank transformation method was used to account for the nonlinear behavior of the model. The variance of the model outputs was decomposed into correlated and uncorrelated partial variances to provide insight into parameter dependency and interaction. The results showed that, in general, the field capacity (soil water content at -33 kPa) in the upper 30 cm of the soil horizon had the greatest contribution (>30%) to the estimate of the water flux and evapotranspiration uncertainty. The most influential parameters affecting the simulation of soil nitrate content, mineralization, denitrification, nitrate leaching and plant nitrogen uptake were the transient coefficient of the fast to intermediate humus pool, the carbon to nitrogen ratio of the fast humus pool, the organic matter decay rate in the fast humus pool, and field capacity. The correlated contribution to the model output uncertainty was <10% for the set of parameters investigated. The findings from this effort were utilized in two calibration case studies to demonstrate the utility of this global sensitivity analysis to reduce the risk of over-parameterization, and to identify the vertical location of
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters
NASA Astrophysics Data System (ADS)
Ren, Luchuan; Tian, Jianwei; Hong, Mingli (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China)
2015-04-01
It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas come partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among these parameters by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects among the sensitive parameters are pronounced at specific sites in the offshore area, and there
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. PMID:24650614
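The Morris screening idea used in such studies can be sketched in its textbook form (the paper above applies a revised version; this generic sketch and its toy model are illustrative assumptions):

```python
import random
import statistics

def morris_screen(model, dim, r=30, levels=4, seed=4):
    """Morris elementary effects over r random one-step-per-factor trajectories."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    grid = [i / (levels - 1) for i in range(levels)]
    starts = [g for g in grid if g + delta <= 1.0]  # keep perturbed points in [0, 1]
    ees = [[] for _ in range(dim)]
    for _ in range(r):
        x = [rng.choice(starts) for _ in range(dim)]
        y = model(x)
        for i in rng.sample(range(dim), dim):  # perturb factors in random order
            x2 = list(x)
            x2[i] += delta
            y2 = model(x2)
            ees[i].append((y2 - y) / delta)
            x, y = x2, y2
    mu_star = [sum(abs(e) for e in es) / len(es) for es in ees]
    sigma = [statistics.pstdev(es) for es in ees]
    return mu_star, sigma

def toy(x):
    # Linear, nonlinear and deliberately inert factor (illustration only)
    return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

mu_star, sigma = morris_screen(toy, 3)
```

mu* ranks overall influence while sigma flags nonlinearity or interactions: here the linear input has near-zero sigma, the quadratic one does not, and the inert input scores zero on both, which is exactly the screening information used to drop factors before calibration.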
Global sensitivity analysis of ozone, HO2, and OH during ARCTAS campaign
NASA Astrophysics Data System (ADS)
Christian, K. E.; Mao, J.; Brune, W. H.
2015-12-01
Modeling the chemical state of the atmosphere is a complicated endeavor due to the complex, non-linear interactions between meteorology, emissions, and kinetics that govern trace gas concentrations. Given the rapid environmental changes taking place, the Arctic is one area of particular interest with regard to climate and atmospheric composition. To observe these changes to the Arctic atmosphere, NASA funded the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign (2008). As part of the mission, measurements of oxidative factors (hydroxyl (OH) and hydroperoxyl (HO2) abundances) were taken using the Airborne Tropospheric Hydrogen Oxides Sensor (ATHOS) aboard the NASA DC-8. Using GEOS-Chem, a popular global chemical transport model, we perform a global sensitivity analysis for the period of the ARCTAS campaign, allowing non-linear interactions between input factors to be accounted for and quantified in the analysis. Sensitivities are determined for around 50 model input factors and for combinations of pairs of input factors using the Random Sampling - High Dimensional Model Representation (RS-HDMR) method. We calculate the uncertainty in these oxidative factors, as well as in ozone, the ozone production rate, and the hydroxyl production rate, and find the sensitivity of these oxidative factors, and of the differences between the measured and modeled oxidative factors, to model inputs in meteorology, emissions, and chemistry. This presentation will include a solid estimate of GEOS-Chem model uncertainty for the period of the ARCTAS campaign, the emissions, meteorology, or chemistry to which oxidative properties are most sensitive for these periods, and the factors to which the differences between the modeled and measured oxidative factors are most sensitive.
SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/ Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.
Toward a more robust variance-based global sensitivity analysis of model outputs
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
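McKay's replicated-LHS main-effect estimator mentioned above can be sketched as follows (without the paper's adaptive accuracy procedure; the toy model is an assumption for illustration):

```python
import math
import random

def replicated_lhs_main_effects(model, dim, n=64, reps=16, seed=5):
    """McKay-style main effects: the same n stratified levels per input are
    re-paired at random in each replicate; averaging y over replicates at a
    fixed level of input i estimates E[y | x_i], and the variance of these
    conditional means over levels, relative to the total variance, is the
    'correlation ratio' or main effect."""
    rng = random.Random(seed)
    levels = [[(k + 0.5) / n for k in range(n)] for _ in range(dim)]  # stratum midpoints
    y_by_level = [[[] for _ in range(n)] for _ in range(dim)]
    all_y = []
    for _ in range(reps):
        perms = [rng.sample(range(n), n) for _ in range(dim)]  # one LHS pairing
        for run in range(n):
            x = [levels[i][perms[i][run]] for i in range(dim)]
            y = model(x)
            all_y.append(y)
            for i in range(dim):
                y_by_level[i][perms[i][run]].append(y)
    mean = sum(all_y) / len(all_y)
    var = sum((y - mean) ** 2 for y in all_y) / len(all_y)
    effects = []
    for i in range(dim):
        cond = [sum(ys) / len(ys) for ys in y_by_level[i]]
        effects.append(sum((m - mean) ** 2 for m in cond) / n / var)
    return effects

def toy(x):
    # Strong first input, weak second, inert third (illustrative assumption)
    return math.sin(2 * math.pi * x[0]) + 0.3 * x[1]

eff = replicated_lhs_main_effects(toy, 3)
```

Note this raw estimator is upward-biased by roughly Var(y)/reps for non-influential inputs; McKay's bias correction and the adaptive sample-size procedure of the paper, both omitted here, address exactly that accuracy question.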
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global
NASA Astrophysics Data System (ADS)
Cottereau, R.; Rochinha, F. A.; Coutinho, A. L. G. A.
2014-08-01
This paper describes a global sensitivity analysis of a fractal-based turbulence-induced flocculation model. The quantities of interest in this analysis are related to the floc diameters in two different configurations. The input parameters with which the sensitivity analyses are performed are the floc aggregation and breakup parameters, the fractal dimension, and the diameter of the primary particles. Two related versions of the flocculation model, both encountered in the literature, are considered: (i) using a dimensional floc breakup parameter, and (ii) using a non-dimensional floc breakup parameter. The main results of the sensitivity analyses are that only two parameters of model (ii) are significant (the aggregation and breakup parameters) and that the relationships between parameter and quantity of interest remain simple. By contrast, with model (i), all parameters have to be considered. When identifying model parameters based on measurements of floc diameters, this analysis hence suggests the use of model (ii) rather than (i). Further, improved models of the fractal dimension do not seem to be required when using the non-dimensional model (ii).
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be
A Protocol for the Global Sensitivity Analysis of Impact Assessment Models in Life Cycle Assessment.
Cucurachi, S; Borgonovo, E; Heijungs, R
2016-02-01
The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need of integrating the LCA and risk analysis methodologies, given the many aspects the two share. One of the ways to reach such integration is through guaranteeing that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention of practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol to conduct global sensitivity analysis in LCA. The article focuses on the life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for the quantification of the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of: (i) the structure of the model and (ii) the importance of uncertain model inputs and the interaction among them. PMID:26595377
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and to a poor understanding of the interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature, and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species and 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
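The variogram-based notion of sensitivity underlying VARS can be illustrated with a short sketch. The two-factor toy model below is an assumption for demonstration, not either of the case-study hydrological models, and only a single lag h is shown where VARS sweeps a whole range of scales:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy response surface: x1 acts through a high-frequency term, x2 linearly.
def model(x1, x2):
    return np.sin(6.0 * x1) + x2

# Directional variogram gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2],
# estimated by Monte Carlo for a single lag h along each factor.
h, n = 0.1, 5000
x1 = rng.uniform(0.0, 1.0 - h, n)
x2 = rng.uniform(0.0, 1.0 - h, n)

gamma_x1 = 0.5 * np.mean((model(x1 + h, x2) - model(x1, x2)) ** 2)
gamma_x2 = 0.5 * np.mean((model(x1, x2 + h) - model(x1, x2)) ** 2)
print(gamma_x1, gamma_x2)  # x1's oscillation makes it far more "sensitive" at this scale
```

Because the x2 dependence is linear, gamma_x2 reduces to 0.5*h^2 exactly, while the oscillatory x1 term dominates at this lag; repeating the estimate over many lags gives the scale-dependent picture of sensitivity that VARS exploits.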
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
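A minimal sketch of the Monte Carlo Filtering idea (behavioral/non-behavioral splitting, then a Kolmogorov-Smirnov distance between the input samples of the two sets). The two-input toy model is an assumption for illustration, not the flight-system testbed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box model standing in for a flight-system simulation;
# the output depends mainly on x1 (an illustrative assumption).
def model(x):
    return x[:, 0] ** 2 + 0.1 * x[:, 1]

# 1. Sample the input space.
n = 5000
x = rng.uniform(0.0, 1.0, size=(n, 2))
y = model(x)

# 2. Filter runs into "behavioral" (output in the region of interest)
#    and "non-behavioral" sets.
behavioral = y > np.quantile(y, 0.9)

# 3. Compare the input distributions of the two sets with a
#    Kolmogorov-Smirnov distance: a large distance means that input
#    drives membership in the output region.
def ks_distance(a, b):
    grid = np.sort(np.concatenate([a, b]))
    ecdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    ecdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(ecdf_a - ecdf_b))

d = [ks_distance(x[behavioral, i], x[~behavioral, i]) for i in range(2)]
print(d)  # x1 separates the two sets far more strongly than x2
```

In practice the same KS ranking would be applied to the principal-component scores as well as the raw variables, which is the combination the abstract proposes.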
Global sensitivity analysis of a dynamic model for gene expression in Drosophila embryos
McCarthy, Gregory D.; Drewell, Robert A.
2015-01-01
It is well known that gene regulation is a tightly controlled process in early organismal development. However, the roles of key processes involved in this regulation, such as transcription and translation, are less well understood, and mathematical modeling approaches in this field are still in their infancy. In recent studies, biologists have taken precise measurements of protein and mRNA abundance to determine the relative contributions of key factors involved in regulating protein levels in mammalian cells. We now approach this question from a mathematical modeling perspective. In this study, we use a simple dynamic mathematical model that incorporates terms representing transcription, translation, mRNA and protein decay, and diffusion in an early Drosophila embryo. We perform global sensitivity analyses on this model using various different initial conditions and spatial and temporal outputs. Our results indicate that transcription and translation are often the key parameters to determine protein abundance. This observation is in close agreement with the experimental results from mammalian cells for various initial conditions at particular time points, suggesting that a simple dynamic model can capture the qualitative behavior of a gene. Additionally, we find that parameter sensitivities are temporally dynamic, illustrating the importance of conducting a thorough global sensitivity analysis across multiple time points when analyzing mathematical models of gene regulation. PMID:26157608
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
A Global Analysis of CYP51 Diversity and Azole Sensitivity in Rhynchosporium commune.
Brunner, Patrick C; Stefansson, Tryggvi S; Fountaine, James; Richina, Veronica; McDonald, Bruce A
2016-04-01
CYP51 encodes the target site of the azole class of fungicides widely used in plant protection. Some ascomycete pathogens carry two CYP51 paralogs called CYP51A and CYP51B. A recent analysis of CYP51 sequences in 14 European isolates of the barley scald pathogen Rhynchosporium commune revealed three CYP51 paralogs, CYP51A, CYP51B, and a pseudogene called CYP51A-p. The same analysis showed that CYP51A exhibits a presence/absence polymorphism, with lower sensitivity to azole fungicides associated with the presence of a functional CYP51A. We analyzed a global collection of nearly 400 R. commune isolates to determine if these findings could be extended beyond Europe. Our results strongly support the hypothesis that CYP51A played a key role in the emergence of azole resistance globally and provide new evidence that the CYP51A gene in R. commune has further evolved, presumably in response to azole exposure. We also present evidence for recent long-distance movement of evolved CYP51A alleles, highlighting the risk associated with movement of fungicide resistance alleles among international trading partners. PMID:26623995
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Debusschere, B.; Najm, H. N.; Williams, M.; Thornton, P. E.
2015-07-01
In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
NASA Astrophysics Data System (ADS)
Rohmer, J.; Foerster, E.
2012-04-01
Large-scale landslide prediction is typically based on numerical modeling, with computer codes generally involving a large number of input parameters. Addressing the influence of each of them on the final result and providing a ranking procedure may be useful for risk management purposes, especially to guide future laboratory or in situ characterizations and studies, but also to simplify the model by fixing the input parameters that have negligible influence. Variance-based global sensitivity analysis relying on the Sobol' indices can provide such valuable information and presents the advantages of exploring the sensitivity to input parameters over their whole range of variation (i.e. in a global manner), of fully accounting for possible interactions between them, and of being applicable without introducing a priori assumptions on the mathematical formulation of the landslide model. Nevertheless, such analyses require a large number of computer code simulations (typically a thousand), which appears impracticable for computationally demanding simulations, with computation times ranging from several hours to several days. To overcome this difficulty, we propose a "meta-model"-based strategy consisting in replacing the complex simulator by a "costless-to-evaluate" statistical approximation (i.e. emulator) provided by a Gaussian-Process (GP) model. This allows computation of sensitivity measures from a limited number of simulations. This meta-modelling strategy is demonstrated on two cases. The first application is a simple analytical model based on the infinite slope analysis, which allows comparison of the sensitivity measures computed using the "true" model with those computed using the GP meta-model. The second application aims at ranking in terms of importance the properties of the elasto-plastic model describing the complex behaviour of the slip surface in the "La Frasse" landslide (Switzerland). This case is more challenging as a single simulation requires at least 4
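For illustration, here is a Saltelli-style Monte Carlo estimate of the first-order Sobol' indices on the standard Ishigami test function, used as a cheap stand-in for the landslide simulator or its GP emulator (the function and sample sizes are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ishigami benchmark with known first-order Sobol' indices
# (~0.314, ~0.442, 0.0 for a=7, b=0.1).
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))
B = rng.uniform(-np.pi, np.pi, (n, k))
yA, yB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([yA, yB]))

# Saltelli-style estimator of the first-order index:
# S_i = mean(yB * (y(A_B^i) - yA)) / Var(y), where A_B^i is
# matrix A with column i replaced by the corresponding column of B.
S = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(yB * (ishigami(ABi) - yA)) / var)
print(np.round(S, 2))
```

With a real simulator, `ishigami` would be replaced by the trained GP emulator, which is exactly what makes the thousands of required evaluations affordable.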
Spatial heterogeneity and sensitivity analysis of crop virtual water content at a global scale
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2015-04-01
In this study, the green and blue virtual water content (VWC) of four staple crops (i.e., wheat, rice, maize, and soybean) is quantified at a high spatial resolution, for the period 1996-2005, and a sensitivity analysis is performed for model parameters. In each grid cell, the crop VWC is obtained as the ratio between the total crop evapotranspiration over the growing season and the actual crop yield. The evapotranspiration is determined with a daily soil water balance that takes into account crop and soil properties, production conditions, and climate. The actual yield is estimated using country-based values provided by the FAOSTAT database, multiplied by a coefficient adjusting for the spatial variability within countries. The model improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The overall water use (blue + green) for the global production of the four grains investigated is 2673 km3/yr. Food production almost entirely depends on green water (>90%), but, when applied, irrigation makes production more water efficient, thus requiring lower VWC. The spatial variability of the virtual water content is partly driven by the yield pattern, with an average correlation coefficient of 0.83, and partly by reference evapotranspiration, with a correlation coefficient of 0.27. Wheat shows the highest spatial variability since it is grown under a wide range of climatic conditions, soil properties, and agricultural practices. The sensitivity analysis is performed to understand how uncertainties in input data propagate and impact the virtual water content accounting. In each cell, fixed changes are introduced to one input parameter at a time, and a sensitivity index, SI, is determined as the ratio between the variation of VWC relative to its baseline value and the variation of the input parameter relative to its reference value. VWC is found to be most sensitive to planting date (PD), followed by the length of
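The one-at-a-time sensitivity index defined above (relative change in VWC divided by relative change in the input) can be written compactly. The `vwc()` function and baseline values below are illustrative placeholders, not the paper's soil-water-balance model:

```python
# SI = (relative change in VWC) / (relative change in the input parameter),
# evaluated one parameter at a time around a fixed baseline.
def vwc(et_season_mm, yield_ton_ha):
    # Toy virtual water content in m3/ton: 10 * ET [mm] / yield [t/ha]
    # (the factor 10 converts mm over one hectare to m3).
    return 10.0 * et_season_mm / yield_ton_ha

def sensitivity_index(param_name, delta=0.10):
    base = {"et_season_mm": 450.0, "yield_ton_ha": 3.0}  # assumed baseline
    vwc0 = vwc(**base)
    perturbed = dict(base)
    perturbed[param_name] *= 1.0 + delta
    return ((vwc(**perturbed) - vwc0) / vwc0) / delta

print(sensitivity_index("et_season_mm"))  # VWC scales linearly with ET, so SI = 1
print(sensitivity_index("yield_ton_ha"))  # inverse dependence, so SI is about -0.91
```

The asymmetry between the two indices (exactly 1 versus roughly -0.91 for a +10% change) is generic for ratio-type outputs: numerator perturbations propagate linearly, denominator perturbations do not.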
Maximising the value of computer experiments using multi-method global sensitivity analysis
NASA Astrophysics Data System (ADS)
Pianosi, F.; Iwema, J.; Rosolem, R.; Wagener, T.
2015-12-01
Global Sensitivity Analysis (GSA) is increasingly recognised as an essential technique for a structured and quantitative approach to the calibration and diagnostic evaluation of environmental models. However, the implementation and interpretation of GSA is complicated by a number of choices that users need to make and for which multiple, equally sensible, options are often available. These choices include in the first place the choice of the GSA method, as well as many implementation details like the definition of the sampling space and strategy. The issue is exacerbated by computational complexity, in terms of both computing time and storage space needed to run the model, which might strongly constrain the number of experiments that can be afforded. While several algorithmic improvements can be adopted to reduce the computing burden of specific GSA methods, in this talk we discuss how a multi-method approach can be established to maximise the information gathered from an individual sample of model evaluations. Using as an example the GSA of a land surface model, we show how different analytical and approximation techniques can be applied sequentially to the same sample of model inputs and outputs, providing complementary information about the model behaviour from different angles, and allowing for testing the impact of the choices made to generate the sample. We further expand our analysis to show how GSA is interconnected with model calibration and uncertainty analysis, so that a careful design of the simulation experiment can be used to address different questions simultaneously.
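One example of squeezing extra information from an existing sample: standardized regression coefficients (SRC) require no special sampling design, so they can be computed "for free" from any generic Monte Carlo run set. A sketch under an assumed linear toy model (not the land surface model of the talk):

```python
import numpy as np

rng = np.random.default_rng(4)

# A generic Monte Carlo sample: three inputs, one output with
# an (assumed) dominantly linear response plus small noise.
n = 2000
x = rng.uniform(-1.0, 1.0, (n, 3))
y = 3.0 * x[:, 0] + x[:, 1] + 0.1 * x[:, 2] + rng.normal(0.0, 0.1, n)

# Fit a linear regression y ~ b0 + b.x and standardize the slopes:
# SRC_i = b_i * std(x_i) / std(y). For a near-linear model the SRCs
# squared sum to roughly R^2 ~ 1.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
src = beta[1:] * x.std(axis=0) / y.std()
print(np.round(src, 2))  # ranking matches the true coefficients 3 > 1 > 0.1
```

When the squared SRCs sum to well below 1, the linear measure is unreliable and the same sample can be passed to a density- or variance-based approximation instead, which is the multi-method reuse the abstract advocates.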
Younes, A; Delay, F; Fajraoui, N; Fahs, M; Mara, T A
2016-08-01
The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that include biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters. PMID:27182791
Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W.; Loizou, George D.
2015-01-01
A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to quantitatively evaluate thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels was identified, in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
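A minimal sketch of the Morris elementary-effects screening used as the first step of such a workflow. The cheap three-input toy model stands in for the computationally expensive thyroidal-system model; input names and the radial design details are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in model: x1 linear and strong, x2 nonlinear, x3 negligible.
def model(x):
    return 2.0 * x[0] + x[1] ** 2 + 0.05 * x[2]

def morris_screening(f, k, r=50, delta=0.25, rng=rng):
    """Radial one-at-a-time design: r base points, one elementary
    effect EE_i = (f(x + delta*e_i) - f(x)) / delta per input each."""
    ee = np.zeros((r, k))
    for j in range(r):
        base = rng.uniform(0.0, 1.0 - delta, size=k)
        f0 = f(base)
        for i in range(k):
            step = base.copy()
            step[i] += delta
            ee[j, i] = (f(step) - f0) / delta
    mu_star = np.abs(ee).mean(axis=0)  # overall influence
    sigma = ee.std(axis=0)             # nonlinearity / interaction signal
    return mu_star, sigma

mu_star, sigma = morris_screening(model, k=3)
print(np.round(mu_star, 2), np.round(sigma, 2))
```

Inputs with small mu_star (here x3) are screened out, and the surviving set is then handed to the expensive Gaussian-emulation stage for main, total and interaction effects.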
A comparison of five forest interception models using global sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Linhoss, Anna C.; Siegert, Courtney M.
2016-07-01
Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that, on average, storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2014-12-01
Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui
2015-11-01
As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models, which estimate nitrogen dynamics under irrigation, are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, for modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters and their interactions contribute most to the variance of the model output during the development of process-based models. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics is reported. The objective was to compare different global sensitivity analysis (GSA) methods on the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one soil physical parameter, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verify that more effort should be focused on quantifying soil parameters to obtain more accurate nitrogen- and crop-related predictions, and stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of the GSA on a
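The Morris screening step used above can be sketched with a radial one-at-a-time design: each factor is perturbed from a common base point, and the mean absolute elementary effect (mu*) ranks the factors. The three-parameter toy model below is illustrative only, not the nitrogen-dynamics model from the study:

```python
import random

def model(x):
    # Hypothetical stand-in for one model output (e.g. crop nitrogen uptake)
    return 5.0 * x[0] + x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]

def morris_mu_star(f, k, r=50, delta=0.25, seed=1):
    """Radial OAT design: r base points, each factor stepped by delta once.
    Returns mu* = mean |elementary effect| per factor (screening measure)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = [rng.random() * (1.0 - delta) for _ in range(k)]  # leave room for +delta
        y0 = f(x)
        for i in rng.sample(range(k), k):   # visit factors in random order
            x2 = list(x)
            x2[i] += delta
            effects[i].append((f(x2) - y0) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]

mu_star = morris_mu_star(model, k=3)
print([round(m, 2) for m in mu_star])       # ranking: x[0] >> x[1] >> x[2]
```

A screening run like this costs only r*(k+1) model evaluations, which is why it is used before the far more expensive EFAST step.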
Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud
NASA Astrophysics Data System (ADS)
Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.
2014-12-01
In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. These human behavioral parameters, as inputs to the MAS model, are highly uncertain and may not even be measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
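The PCE-based variance decomposition works by projecting the response onto an orthonormal polynomial basis; Sobol' indices then follow directly from the expansion coefficients. Below is a sketch for a hypothetical two-factor response with uniform inputs rescaled to [-1, 1], using a full-tensor Gauss-Legendre projection rather than the sparse, regression-based PCE a 1,000-scenario coupled model would require:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def model(x1, x2):
    # Hypothetical stand-in for one coupled-model output (e.g. crop profit);
    # coefficients are illustrative only.
    return 1.0 + 2.0 * x1 + 0.5 * x2 + 1.5 * x1 * x2

deg = 3
xi, w = leggauss(deg + 1)                  # Gauss-Legendre nodes and weights
W = np.outer(w, w) / 4.0                   # weights for the uniform density on [-1,1]^2
X1, X2 = np.meshgrid(xi, xi, indexing="ij")
Y = model(X1, X2)

def psi(n, x):
    # Legendre polynomial, orthonormal w.r.t. the uniform density on [-1, 1]
    return Legendre.basis(n)(x) * np.sqrt(2 * n + 1)

# Project the response onto the tensor basis to get the PCE coefficients
c = {(m, n): float(np.sum(W * Y * psi(m, X1) * psi(n, X2)))
     for m in range(deg + 1) for n in range(deg + 1)}

var = sum(v * v for key, v in c.items() if key != (0, 0))
S1 = sum(v * v for (m, n), v in c.items() if m > 0 and n == 0) / var
S2 = sum(v * v for (m, n), v in c.items() if m == 0 and n > 0) / var
S12 = sum(v * v for (m, n), v in c.items() if m > 0 and n > 0) / var
print(round(S1, 3), round(S2, 3), round(S12, 3))   # 0.8 0.05 0.15
```

Because the indices are read off the coefficients, no extra model runs are needed once the expansion is built, which is what makes PCE attractive when each scenario is expensive.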
Global sensitivity analysis of a flocculation model for turbidity currents
NASA Astrophysics Data System (ADS)
Rochinha, F. A.; Coutinho, A. L.; Cottereau, R.
2013-05-01
and breakup coefficients, fractal dimension, and primary particle diameter), the first three of which are particularly difficult to measure experimentally. Several authors have tried to observe the influence of these parameters on some quantities of interest in flocculation experiments, by modifying the values of the parameters one by one around reference values. This type of local sensitivity analysis provides some insight but is not sufficient when the parameters vary over several orders of magnitude. We propose in this presentation to describe a global sensitivity analysis of this flocculation model. The input distributions for the parameters are chosen based on an extensive data set from the literature. The global sensitivity analysis is performed using the Sobol and FAST methods and aims at observing the influence of the parameters on two quantities of interest: (i) the equilibrium diameter of the flocs, that can be computed analytically, and (ii) a maximum floc size in a 1D tidal forcing experiment.
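The FAST method used above estimates first-order indices by driving every parameter along a periodic search curve at its own frequency and reading each parameter's share of variance off the Fourier spectrum. A minimal sketch on an additive toy function follows (the flocculation model itself is not reproduced; the frequency set and sample size are illustrative):

```python
import math

def model(x):
    # Additive toy response standing in for a floc-size quantity of interest
    return 2.0 * x[0] + 0.5 * x[1] + x[2] ** 2

omegas = [11, 35, 73]          # driver frequencies, free of low-order interference
N = 4097                       # sample points along the periodic search curve
s_grid = [2.0 * math.pi * j / N for j in range(N)]
# search curve: each x_i sweeps [0, 1] with a uniform marginal distribution
X = [[0.5 + math.asin(math.sin(w * s)) / math.pi for w in omegas] for s in s_grid]
Y = [model(x) for x in X]

mean_y = sum(Y) / N
D = sum((y - mean_y) ** 2 for y in Y) / N          # total variance on the curve

def power(freq):
    # spectral power of Y at an integer frequency along the curve
    a = sum(Y[j] * math.cos(freq * s_grid[j]) for j in range(N)) / N
    b = sum(Y[j] * math.sin(freq * s_grid[j]) for j in range(N)) / N
    return 2.0 * (a * a + b * b)

# First-order index: power at omega_i and its first few harmonics over D
S1 = [sum(power(h * w) for h in (1, 2, 3, 4)) / D for w in omegas]
print([round(s, 2) for s in S1])
```

For this additive function the indices sum to (nearly) one; any shortfall beyond harmonic truncation would indicate interactions, which FAST in this basic form does not resolve.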
The analysis sensitivity to tropical winds from the Global Weather Experiment
NASA Technical Reports Server (NTRS)
Paegle, J.; Paegle, J. N.; Baker, W. E.
1986-01-01
The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while, in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e. the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
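The variogram analogy behind VARS can be made concrete with a directional variogram, gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2], estimated by Monte Carlo. The toy function below is chosen so that the factor ranking changes with the lag h, the multi-scale behavior VARS is designed to expose (it is not one of the paper's synthetic functions):

```python
import math
import random

def model(x):
    # Toy response: x[0] acts on a short length scale, x[1] on a long one.
    return math.sin(6.0 * math.pi * x[0]) + x[1]

def directional_variogram(f, k, h, n=20000, seed=2):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], Monte Carlo estimate.
    Base points are drawn from [0, 1-h]^k so the perturbed point stays in [0,1]."""
    rng = random.Random(seed)
    out = []
    for i in range(k):
        acc = 0.0
        for _ in range(n):
            x = [rng.random() * (1.0 - h) for _ in range(k)]
            xh = list(x)
            xh[i] += h
            acc += (f(xh) - f(x)) ** 2
        out.append(0.5 * acc / n)
    return out

g_small = directional_variogram(model, k=2, h=0.05)
g_large = directional_variogram(model, k=2, h=1.0 / 3.0)
print([round(g, 3) for g in g_small])   # short lag: x[0] dominates
print([round(g, 3) for g in g_large])   # lag equal to the sine period: ranking flips
```

At h = 1/3 the sine factor repeats exactly, so its variogram vanishes while the linear factor's grows; a derivative-based (Morris-like) or a single-scale index would report only one of these two orderings.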
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
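The headline finding, that simulations are consistently more sensitive to forcing biases than to random errors, can be reproduced in miniature with a degree-day snow sketch. Everything below is illustrative (a toy accumulation/melt rule, not the Utah Energy Balance model): a 20% precipitation bias shifts peak snow water equivalent by the full 20%, while zero-mean random errors of the same magnitude largely cancel over the accumulation season:

```python
import random

def snowmelt_model(precip, temp, ddf=3.0):
    """Minimal degree-day snow sketch: snowfall accumulates when T <= 0 degC,
    melt proceeds at ddf mm/degC/day when T > 0, and rain on warm days is
    ignored. Returns peak snow water equivalent (mm)."""
    swe, peak = 0.0, 0.0
    for p, t in zip(precip, temp):
        if t <= 0.0:
            swe += p
        else:
            swe = max(0.0, swe - ddf * t)
        peak = max(peak, swe)
    return peak

random.seed(6)
days = 180
temp = [-5.0 if d < 120 else 5.0 for d in range(days)]      # cold season, then warm
precip = [2.0 if d % 3 == 0 else 0.0 for d in range(days)]  # storm every third day

base = snowmelt_model(precip, temp)
# a 20% multiplicative precipitation bias
biased = snowmelt_model([1.2 * p for p in precip], temp)
# zero-mean random errors of the same 20% magnitude
noisy = snowmelt_model([p * (1.0 + random.uniform(-0.2, 0.2)) for p in precip], temp)

print(round((biased - base) / base, 3))    # the bias propagates almost fully
print(round(abs(noisy - base) / base, 3))  # the random errors largely cancel
```

The asymmetry arises because peak SWE integrates precipitation over many storms: a bias accumulates, whereas independent errors average out, which is the intuition behind result (1) in the abstract.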
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. PMID:26367184
NASA Astrophysics Data System (ADS)
Citro, V.; Giannetti, F.; Luchini, P.; Auteri, F.
2015-08-01
We study the full three-dimensional instability mechanism past a hemispherical roughness element immersed in a laminar Blasius boundary layer. The inherent three-dimensional flow pattern beyond the Hopf bifurcation is characterized by coherent vortical structures usually called hairpin vortices. Direct numerical simulation results are used to analyze the formation and the shedding of hairpin vortices inside the shear layer. The first bifurcation is investigated by global-stability tools. We show the spatial structure of the linear direct and adjoint global eigenmodes of the linearized Navier-Stokes equations and use the structural-sensitivity field to locate the region where the instability mechanism acts. The core of this instability is found to be symmetric and spatially localized in the region immediately downstream of the roughness element. The effect of the variation of the ratio between the obstacle height k and the boundary layer thickness δk* is also considered. The resulting bifurcation scenario is found to agree well with previous experimental investigations. A limit regime for k/δk* < 1.5 is attained where the critical Reynolds number is almost constant, Re_k ≈ 580. This result indicates that, in these conditions, the only important parameter identifying the bifurcation is the unperturbed (i.e., without the roughness element) velocity slope at the wall.
El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël
2015-07-01
Sensitivity analysis is a typical part of biomechanical model evaluation. For lower limb multi-body models, sensitivity analyses have been mainly performed on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis achieved on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axis, 2.5 mm on the origin and insertion of ligaments) using statistical distributions and propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This will allow us to identify the most influential parameters on the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and on the joint angles and displacements. To quantify this influence, a Fourier-based algorithm of global sensitivity analysis coupled with a Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints, that is to say the distances between measured and model-determined skin marker positions, and of the kinematic constraints strongly influence the joint kinematics obtained from the lower limb multi-body model, for example, positions of the skin markers embedded in the shank and pelvis, parameters of the patellofemoral hinge axis, and parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of the lower limb multi-body models may be considered as essential. PMID:25783762
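The Latin hypercube sampling that feeds the Fourier-based analysis above can be sketched in a few lines of plain Python: each parameter's range is split into n equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension so each column is stratified while rows remain (approximately) uncorrelated:

```python
import random

def latin_hypercube(n, k, seed=3):
    """n samples in [0,1)^k with exactly one point per stratum per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        # one jittered draw inside each of the n strata, then shuffle the strata
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [list(row) for row in zip(*cols)]

X = latin_hypercube(n=10, k=3)
for j in range(3):
    col = sorted(x[j] for x in X)
    # exactly one point falls in each interval [i/10, (i+1)/10)
    print(all(i / 10 <= col[i] < (i + 1) / 10 for i in range(10)))
```

Mapping each unit-interval column through the inverse CDF of the chosen statistical distribution (normal for marker positions, etc.) then yields the stratified parameter sets the study propagates through the multi-body optimisation.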
The global burden of disease in 1990: summary results, sensitivity analysis and future directions.
Murray, C. J.; Lopez, A. D.; Jamison, D. T.
1994-01-01
A basic requirement for evaluating the cost-effectiveness of health interventions is a comprehensive assessment of the amount of ill health (premature death and disability) attributable to specific diseases and injuries. A new indicator, the number of disability-adjusted life years (DALYs), was developed to assess the burden of disease and injury in 1990 for over 100 causes by age, sex and region. The DALY concept provides an integrative, comprehensive methodology to capture the entire amount of ill health which will, on average, be incurred during one's lifetime because of new cases of disease and injury in 1990. It differs in many respects from previous attempts at global and regional health situation assessment which have typically been much less comprehensive in scope, less detailed, and limited to a handful of causes. This paper summarizes the DALY estimates for 1990 by cause, age, sex and region. For the first time, those responsible for deciding priorities in the health sector have access to a disaggregated set of estimates which, in addition to facilitating cost-effectiveness analysis, can be used to monitor global and regional health progress for over a hundred conditions. The paper also shows how the estimates depend on particular values of the parameters involved in the calculation. PMID:8062404
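Stripped of the age weighting and 3% time discounting used in the 1990 study, the DALY arithmetic reduces to years of life lost to premature death (YLL) plus years lived with disability (YLD). The sketch below uses this simplified form; all numbers are illustrative, not estimates from the study:

```python
def dalys(deaths, life_expectancy_at_death, cases, disability_weight, duration):
    """Simplified DALY = YLL + YLD, without the age weighting and 3% annual
    discounting applied in the original Global Burden of Disease 1990 study."""
    yll = deaths * life_expectancy_at_death          # years of life lost
    yld = cases * disability_weight * duration       # years lived with disability
    return yll + yld

# Illustrative cause: 1000 deaths at a remaining life expectancy of 30 years,
# plus 5000 incident cases with disability weight 0.2 lasting 2 years each.
print(dalys(deaths=1000, life_expectancy_at_death=30.0,
            cases=5000, disability_weight=0.2, duration=2.0))   # 32000.0
```

The sensitivity analysis mentioned in the abstract amounts to recomputing such totals while varying the discount rate, age weights, and disability weights over plausible ranges.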
NASA Astrophysics Data System (ADS)
Herman, J. D.; Kollat, J. B.; Reed, P. M.; Wagener, T.
2013-04-01
The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) model over a six-month period in the Blue River Watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly identify sensitive and insensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. Method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
NASA Astrophysics Data System (ADS)
Herman, J. D.; Kollat, J. B.; Reed, P. M.; Wagener, T.
2013-07-01
The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
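The computational gap reported above follows directly from the sampling costs of the two designs: a Saltelli-style Sobol' analysis needs roughly N_base * (k + 2) model runs, while Morris needs r * (k + 1). The parameter and sample counts below are hypothetical, chosen only to show the orders of magnitude, not taken from the HL-RDHM setup:

```python
def sobol_cost(n_base, k):
    # Saltelli sampling: n_base base samples, each requiring k + 2 model runs
    return n_base * (k + 2)

def morris_cost(r, k):
    # Morris screening: r trajectories, each requiring k + 1 model runs
    return r * (k + 1)

k = 52                                   # hypothetical parameter count
print(sobol_cost(n_base=120000, k=k))    # millions of runs
print(morris_cost(r=100, k=k))           # a few thousand runs
print(sobol_cost(120000, k) // morris_cost(100, k))  # orders-of-magnitude ratio
```

Because the Sobol' cost scales with both the base sample size and k, while Morris scales only with r and k, the gap widens exactly where distributed models hurt most: large k.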
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
NASA Technical Reports Server (NTRS)
Bittker, David A.
1996-01-01
A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta model based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method. PMID:21656089
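The quantity at the heart of this approach, the variance of the conditional expectation Var(E[Y|X_i]), can also be estimated without any meta-model by simple binning, which makes the VCE idea concrete (the MARS step in the paper replaces these bin averages with a fitted spline surface, which is what yields the computational savings). The toy model below is illustrative, not an ODE system:

```python
import random

def model(x):
    # Toy stand-in for an ODE model output at a fixed time point
    return 3.0 * x[0] + x[1] + 0.5 * x[0] * x[1]

random.seed(4)
n, k, n_bins = 50000, 2, 50
X = [[random.random() for _ in range(k)] for _ in range(n)]
Y = [model(x) for x in X]
mean_y = sum(Y) / n
var_y = sum((y - mean_y) ** 2 for y in Y) / n

def vce(i):
    # Estimate Var(E[Y | X_i]) by averaging Y within narrow bins of X_i
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(X, Y):
        bins[min(int(x[i] * n_bins), n_bins - 1)].append(y)
    # count-weighted variance of the per-bin conditional means
    return sum(len(b) * (sum(b) / len(b) - mean_y) ** 2
               for b in bins if b) / n

S = [vce(i) / var_y for i in range(k)]   # first-order Sobol' indices
print([round(s, 2) for s in S])
```

Binning works well in one dimension but degrades for interaction terms and small samples, which is precisely the regime where an analytic VCE over a fitted MARS structure pays off.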
NASA Astrophysics Data System (ADS)
Shahkarami, Pirouz; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars
2015-01-01
This study presents an analytical approach to simulate nuclide migration through a channel in a fracture accounting for an arbitrary-length decay chain. The nuclides are retarded as they diffuse in the porous rock matrix and stagnant zones in the fracture. The Laplace transform and similarity transform techniques are applied to solve the model. The analytical solution to the nuclide concentrations at the fracture outlet is governed by nine parameters representing different mechanisms acting on nuclide transport through a fracture, including diffusion into the rock matrices, diffusion into the stagnant water zone, chain decay and hydrodynamic dispersion. Furthermore, to assess how sensitive the results are to parameter uncertainties, the Sobol method is applied in variance-based global sensitivity analyses of the model output. The Sobol indices show how uncertainty in the model output is apportioned to the uncertainty in the model input. This method takes into account both direct effects and interaction effects between input parameters. The simulation results suggest that in the case of pulse injections, ignoring the effect of a stagnant water zone can lead to significant errors in the time of first arrival and the peak value of the nuclides. Likewise, neglecting the parent and modeling its daughter as a single stable species can result in a significant overestimation of the peak value of the daughter nuclide. It is also found that as the dispersion increases, the early arrival time and the peak time of the daughter decrease while the peak value increases. More importantly, the global sensitivity analysis reveals that for time periods greater than a few thousand years, the uncertainty of the model output is more sensitive to the values of the individual parameters than to the interaction between them. Moreover, if one tries to evaluate the true values of the input parameters at the same cost and effort, the determination of priorities should follow a certain
Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas
2013-01-01
Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems like climate, economy, ecosystems it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA - a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equations models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of the computational time (up to 12 times). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on Windows and Linux systems, in 32- and 64-bit versions. PMID:24367574
NASA Astrophysics Data System (ADS)
Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael
2016-04-01
Weather index insurance promises financial resilience to farmers struck by harsh weather, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data-collection cost, we propose the use of rain intensity and frequency as indexes, which could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total-rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature conditions, yields can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk, owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results of a global sensitivity analysis.
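The trigger-and-payoff structure discussed above can be illustrated with a minimal payout function (a hedged sketch: the linear payout form and parameter names are common contract conventions, not specifics from this study):

```python
def index_payout(index_value, trigger, exit_level, limit):
    """Linear payout of a weather index contract: zero at or above the
    trigger, the full limit at or below the exit level, and linear
    in between.  (Illustrative contract structure only.)"""
    if index_value >= trigger:
        return 0.0
    if index_value <= exit_level:
        return limit
    return limit * (trigger - index_value) / (trigger - exit_level)

# A hypothetical drought contract paying on low seasonal rainfall (mm):
payout = index_payout(index_value=180, trigger=250, exit_level=100, limit=10_000)
```

The basis-risk problem the abstract describes arises exactly here: if `index_value` tracks actual crop damage poorly, the payout and the loss decouple regardless of how the trigger is tuned.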
Making sense of global sensitivity analyses
NASA Astrophysics Data System (ADS)
Wainwright, Haruko M.; Finsterle, Stefan; Jung, Yoojin; Zhou, Quanlin; Birkholzer, Jens T.
2014-04-01
This study presents improved understanding of sensitivity analysis methods through a comparison of the local sensitivity and two global sensitivity analysis methods: the Morris and Sobol'/Saltelli methods. We re-interpret the variance-based sensitivity indices from the Sobol'/Saltelli method as difference-based measures. It suggests that the difference-based local and Morris methods provide the effect of each parameter including its interaction with others, similar to the total sensitivity index from the Sobol'/Saltelli method. We also develop an alternative approximation method to efficiently compute the Sobol' index, using one-dimensional fitting of system responses from a Monte-Carlo simulation. For illustration, we conduct a sensitivity analysis of pressure propagation induced by fluid injection and leakage in a reservoir-aquitard-aquifer system. The results show that the three methods provide consistent parameter importance rankings in this system. Our study also reveals that the three methods can provide additional information to improve system understanding.
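The pick-and-freeze Monte-Carlo estimators behind the Sobol'/Saltelli first-order and total indices can be sketched as follows (standard-library Python; the additive test model and sample size are illustrative choices, not from the study):

```python
import random

def sobol_indices(f, k, n, seed=0):
    """Estimate first-order (S) and total-effect (ST) Sobol' indices for
    f: [0,1]^k -> R with independent uniform inputs, using the
    Saltelli (2010) first-order and Jansen total-effect estimators."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n - 1)
    S, ST = [], []
    for i in range(k):
        # A with column i swapped in from B ("pick and freeze")
        ABi = [A[j][:i] + [B[j][i]] + A[j][i + 1:] for j in range(n)]
        fABi = [f(x) for x in ABi]
        S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / (n * var))
        ST.append(sum((fA[j] - fABi[j]) ** 2 for j in range(n)) / (2 * n * var))
    return S, ST

# Additive toy model: analytic indices are 16/21, 4/21, 1/21
model = lambda x: 4 * x[0] + 2 * x[1] + x[2]
S, ST = sobol_indices(model, k=3, n=4096)
```

For a purely additive model like this one, first-order and total indices coincide; a gap between `ST[i]` and `S[i]` in a real model is the signature of the interaction effects the abstract discusses.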
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) truncation of the dimensionality of the ANOVA component functions, 2) an active-dimension technique, especially for second- and higher-order parameter interactions, and 3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy
2015-04-01
Uncertainty in parameters is a well-known source of model output uncertainty that undermines model reliability and restricts model application. A large number of parameters, in addition to a lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provide quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. the Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely used variance-based method, and their differences are discussed. Moreover, the effect of the model output definition on parameter ranking is investigated using the Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. To assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, than the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
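The bootstrap procedure used above to attach 95% confidence intervals to sensitivity indices can be sketched generically (a percentile-bootstrap sketch; the estimator and data below are illustrative placeholders, not the SWAT model outputs):

```python
import random
import statistics

def bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a sensitivity
    index (or any statistic) computed from a finite sample.
    Resamples the data with replacement, re-computes the statistic,
    and reads off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy example: a 95% CI for the mean of noisy samples
rng = random.Random(1)
sample = [rng.gauss(0.5, 0.1) for _ in range(200)]
lo, hi = bootstrap_ci(sample, statistics.fmean)
```

The abstract's observation translates directly: an index whose bootstrap interval is wide relative to its point estimate (as the variance-based indices were for non-influential parameters) cannot support a confident ranking.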
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2016-04-01
Simulations from environmental models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation, and to prioritize efforts for uncertainty reduction. Global Sensitivity Analysis (GSA) provides the theoretical framework and the numerical tools to gain this understanding. However, in traditional applications of GSA, model outputs are an aggregation of the full set of simulated variables. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. In this work, we propose a time-varying version of a recently developed density-based GSA method, called PAWN, as a viable option to reduce this loss of information. We apply our approach to a medium-complexity hydrological model in order to address two questions: (1) Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? (2) Do these influences change in catchments with different characteristics? The results present the first quantitative investigation of the relative importance of parameter and data uncertainty across time. They also provide a demonstration of the value of time-varying GSA to investigate the propagation of uncertainty through numerical models and therefore guide additional data collection needs and model calibration/assessment.
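The density-based idea behind PAWN can be sketched as follows: condition the output sample on slices of one input and summarize the Kolmogorov-Smirnov distance to the unconditional output distribution. This is a simplified illustration, not the authors' implementation; the toy model and slice count are assumptions:

```python
import bisect
import random

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum vertical
    distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    return max(
        abs(bisect.bisect_right(a, x) / len(a)
            - bisect.bisect_right(b, x) / len(b))
        for x in a + b
    )

def pawn_index(xs, ys, i, n_cond=10):
    """PAWN-style index for input i: the median KS distance between
    the unconditional output sample and outputs conditioned on
    rank-based slices of x_i.  An influential input shifts the
    conditional output distribution, giving a large distance."""
    order = sorted(range(len(xs)), key=lambda j: xs[j][i])
    size = len(xs) // n_cond
    ks = sorted(
        ks_distance(ys, [ys[j] for j in order[s * size:(s + 1) * size]])
        for s in range(n_cond)
    )
    return ks[len(ks) // 2]  # median over conditioning slices

rng = random.Random(0)
xs = [[rng.random(), rng.random()] for _ in range(2000)]
ys = [5 * x[0] + 0.1 * x[1] for x in xs]  # x0 dominates the output
s0, s1 = pawn_index(xs, ys, 0), pawn_index(xs, ys, 1)
```

A time-varying version, as proposed in the paper, would repeat this computation on the output at each time step rather than on an aggregated objective function.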
NASA Astrophysics Data System (ADS)
Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy
2016-04-01
Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F. 2004, How important is global warming for coastal erosion? An editorial comment, Climatic Change, vol. 64, no. 1-2, doi:10.1023/B:CLIM.0000024785.91858, ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic IPCC sea-level rise projections. For scenarios RCP 6.0 and 8.5, and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. By contrast, the model predictions are unlikely to differ from the current situation under scenario RCP 2.6. To gain insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of shoreline change uncertainty is due to the climate change scenario, if all anthropogenic greenhouse-gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
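The Bruun rule referred to above takes a simple closed form, relating shoreline retreat to sea-level rise through the geometry of the active beach profile (this is the textbook form of the rule; the numbers below are purely illustrative):

```python
def bruun_retreat(sea_level_rise, profile_width, berm_height, closure_depth):
    """Bruun-rule shoreline retreat R = S * L / (B + h*), where S is
    the sea-level rise, L the cross-shore width of the active profile,
    B the berm height, and h* the closure depth.  Equivalently,
    R = S / tan(beta) for average active-profile slope tan(beta)."""
    return sea_level_rise * profile_width / (berm_height + closure_depth)

# Illustration: 0.5 m of rise over a 500 m active profile with B + h* = 10 m
retreat = bruun_retreat(0.5, 500.0, 2.0, 8.0)  # -> 25.0 m of retreat
```

Because the rule amplifies sea-level rise by the inverse profile slope (a factor of 50 here), the study's question of when the Bruun term emerges from the noise of other hydrosedimentary processes is essentially a signal-to-noise comparison.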
NASA Astrophysics Data System (ADS)
Bounceur, N.; Crucifix, M.; Wilkinson, R. D.
2015-05-01
A global sensitivity analysis is performed to describe the effects of astronomical forcing on the climate-vegetation system simulated by LOVECLIM, a model of intermediate complexity, under interglacial conditions. The methodology relies on the estimation of sensitivity measures, using a Gaussian process emulator as a fast surrogate of the climate model, calibrated on a set of well-chosen experiments. The outputs considered are the annual mean temperature, precipitation and growing degree days (GDD). The experiments were run with two distinct land surface schemes to estimate the importance of vegetation feedbacks on climate variance. This analysis provides a spatial description of the variance due to the factors and their combinations, in the form of "fingerprints" obtained from the covariance indices. The results are broadly consistent with the current understanding of Earth's climate response to astronomical forcing. In particular, precession and obliquity are found to contribute equally to GDD in the Northern Hemisphere in LOVECLIM, and the effect of obliquity on the response of Southern Hemisphere temperature dominates precession effects. Precession dominates precipitation changes in subtropical areas. Compared to standard approaches based on a small number of simulations, the methodology presented here allows us to identify more systematically the regions susceptible to experiencing rapid climate change in response to the smooth astronomical forcing change. In particular, we find that using interactive vegetation significantly enhances the expected rates of climate change, specifically in the Sahel (up to 50% precipitation change in 1000 years) and in the Canadian Arctic region (up to 3° in 1000 years). None of the tested astronomical configurations were found to induce multiple steady states, but, at low obliquity, we observed the development of an oscillatory pattern that has already been reported in LOVECLIM. Although the mathematics of the analysis are
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six with the greatest influence on global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in mid-latitude continental regions, but very little in tropical continental regions.
NASA Astrophysics Data System (ADS)
Coutu, S.
2014-12-01
A sensitivity analysis was conducted on an existing parsimonious model that aims to reproduce flow in engineered urban catchments and sewer networks. The model is characterized by its parsimonious structure and is limited to seven calibration parameters. The objective of this study is to demonstrate how different levels of sensitivity analysis can influence the interpretation of input-parameter relevance in urban hydrology, even for models with a light structure. In this perspective, we applied a one-at-a-time (OAT) sensitivity analysis (SA) as well as a variance-based, global and model-independent method: the calculation of Sobol indices. Sobol's first-order and total-effect indices were estimated using a Monte-Carlo approach. We present evidence of the irrelevance of calculating Sobol's second-order indices when the uncertainty in index estimation is too high. The Sobol results showed that two parameters drive model performance: the subsurface discharge rate and the root-zone drainage coefficient (Clapp exponent). Interestingly, the surface discharge rate, which is responsible for flow in impervious areas, has no significant relevance, contrary to what was expected from the one-at-a-time sensitivity analysis alone. This last statement is clearly not straightforward. It highlights the utility of carrying out variance-based sensitivity analysis in urban hydrology, even when using a parsimonious model, in order to prevent misunderstandings of the system dynamics and consequent management mistakes.
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.; Hunke, E. C.
2015-12-01
Sea ice and climate models are key to understanding and predicting ongoing changes in the Arctic climate system, particularly the sharp reductions in sea ice area and volume. There are, however, uncertainties arising from multiple sources, including parametric uncertainty, that affect model output. The Los Alamos Sea Ice Model (CICE) includes complex parameterizations of sea ice processes with a large number of parameters for which accurate values are still not well established. To enhance the credibility of sea ice predictions, it is necessary to understand the sensitivity of model results to uncertainties in input parameters. In this work we conduct a variance-based global sensitivity analysis of sea ice extent, area, and volume. This approach allows full exploration of our 40-dimensional parameter space, and the model sensitivity is quantified in terms of main- and total-effect indices. The global sensitivity analysis does not require the assumptions of additivity or linearity implicit in the most commonly used one-at-a-time sensitivity analyses. A Gaussian process emulator of the sea ice model is built and then used to generate the large number of samples necessary to calculate the sensitivity indices, at a much lower computational cost than using the full model. The sensitivity indices are used to rank the most important model parameters affecting Arctic sea ice extent, area, and volume. The most important parameters contributing to the model variance include snow conductivity and grain size, and the time-scale for drainage of melt ponds. Other important parameters include the thickness of the ice radiative scattering layer, ice density, and the ice-ocean drag coefficient. We discuss physical processes that explain variations in simulated sea ice variables in terms of the first-order parameter effects and the most important interactions among them.
NASA Astrophysics Data System (ADS)
Harp, D.; Vesselinov, V. V.
2011-12-01
A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions with observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repulsing agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. posterior parameter distributions are not produced), however, this
Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J
2015-01-01
Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs and therefore require special attention during calibration. The estimation of sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change driven changes in sediment export. PMID:25302447
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in identifying the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low-rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
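The Morris screening used in this study can be sketched with a minimal elementary-effects implementation (standard-library Python; the toy model, grid step and trajectory count are illustrative assumptions, not APSIM-Wheat specifics):

```python
import random

def morris_mu_star(f, k, n_traj=20, delta=0.25, seed=0):
    """Morris elementary-effects screening: mu* (the mean absolute
    elementary effect) per input, estimated from random one-at-a-time
    trajectories in the unit hypercube.  Each trajectory perturbs
    every factor once, in random order, reusing the previous output."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = [rng.uniform(0, 1 - delta) for _ in range(k)]  # keep x+delta <= 1
        y = f(x)
        order = list(range(k))
        rng.shuffle(order)
        for i in order:  # perturb one factor at a time
            x[i] += delta
            y_new = f(x)
            effects[i].append(abs(y_new - y) / delta)
            y = y_new
    return [sum(e) / len(e) for e in effects]

# Toy model: one dominant factor, one purely additive, one interacting
model = lambda x: 10 * x[0] + x[1] + 5 * x[0] * x[2]
mu_star = morris_mu_star(model, k=3)
```

With k factors and t trajectories this costs only t(k + 1) model runs, which is why Morris is the method of choice when, as here, each of millions of simulations is expensive and 90 traits must be screened.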
NASA Astrophysics Data System (ADS)
Chang, S. J.; Graham, W. D.; Hwang, S.
2014-12-01
Projecting evapotranspiration to estimate future agricultural irrigation demand is uncertain because estimates of future precipitation and evapotranspiration vary significantly depending on the Global Climate Model (GCM), future RCP emission scenario and reference evapotranspiration (RET) estimation method selected. Understanding the relative contributions of these sources of uncertainty is important for effective long-term water resource planning. In this study, variance-based sensitivity analysis (Saltelli et al., 2010) was used to assess the sensitivity of estimated future changes in precipitation, RET and the Standardized Precipitation Evapotranspiration Index (SPEI) drought index to 9 GCMs, 3 RCP scenarios, and 11 ET estimation methods over 9 regions of the United States for two future periods: 2030-2060 and 2070-2100. Future changes in precipitation were found to be most sensitive to GCM selection for all U.S. regions and both future periods. Projected changes in future RET and SPEI were more sensitive to the selection of ET method and GCM than to the selection of RCP scenario. In general, changes in ET and SPEI were most sensitive to the ET estimation method in the cold season and to GCM selection in the warm season; however, sensitivities differed by region, season and future period. This study underscores the importance of evaluating projections of future agricultural irrigation demand with an ensemble of GCMs and ET estimation methods rather than relying on a few GCMs and a single ET estimation method.
Cosmopolitan Sensitivities, Vulnerability, and Global Englishes
ERIC Educational Resources Information Center
Jacobsen, Ushma Chauhan
2015-01-01
This paper is the outcome of an afterthought that assembles connections between three elements: the ambitions of cultivating cosmopolitan sensitivities that circulate vibrantly in connection with the internationalization of higher education, a course on Global Englishes at a Danish university and the sensation of vulnerability. It discusses the…
1992-02-20
SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood-ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the parameter estimates and true population differences.
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2001-01-01
The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, effects of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.
Global Sensitivity Measures from Given Data
Elmar Plischke; Emanuele Borgonovo; Curtis L. Smith
2013-05-01
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures for datasets produced outside a conventional input–output functional framework.
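A minimal given-data estimator in the spirit described above (partitioning one input into bins and taking the variance of the bin-wise output means) might look as follows; the function name and binning scheme are illustrative, not the paper's exact design:

```python
import numpy as np

def first_order_index_from_data(x, y, n_bins=20):
    """Estimate the first-order index S_i = Var(E[Y|X_i]) / Var(Y) from a
    given input-output sample: bin X_i by quantiles, then take the weighted
    variance of the bin-wise means of Y. No extra model runs are needed."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bins = [b for b in range(n_bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in bins])
    weights = np.array([np.mean(idx == b) for b in bins])
    grand = np.sum(weights * cond_means)
    return np.sum(weights * (cond_means - grand) ** 2) / y.var()

# Illustrative additive model: x1 carries almost all of the output variance.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 20000), rng.uniform(-1, 1, 20000)
y = 4.0 * x1 + 0.5 * x2
s1 = first_order_index_from_data(x1, y)   # analytic value ~0.985
s2 = first_order_index_from_data(x2, y)   # analytic value ~0.015
```

The analytic indices here are 16/16.25 and 0.25/16.25, so the binned estimates should land near 0.98 and 0.02, illustrating how a single sample supports all first-order indices at once.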
Sensitivity Analysis Without Assumptions
VanderWeele, Tyler J.
2016-01-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
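The bounding factor described in this abstract has a simple published closed form (Ding and VanderWeele), B = RR_EU · RR_UD / (RR_EU + RR_UD − 1), where the two sensitivity parameters are the maximum relative risks relating exposure to confounder and confounder to outcome. A small sketch with illustrative numbers:

```python
def bounding_factor(rr_eu, rr_ud):
    """Joint bounding factor B from the two sensitivity parameters:
    rr_eu - max relative risk of the exposure on the unmeasured confounder,
    rr_ud - max relative risk of the confounder on the outcome.
    A confounder of this strength can shift an observed risk ratio
    by at most a factor of B."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

# A confounder doubling both risks can only account for a factor of 4/3.
B = bounding_factor(2.0, 2.0)

# Smallest true risk ratio compatible with an observed RR of 2.5
# under that confounding strength.
adjusted_lower_bound = 2.5 / B
```

Note how the bound exceeds neither relative risk alone: explaining away a large observed effect requires *both* confounder associations to be strong, which is the sharpened Cornfield-type condition the abstract refers to.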
Arbitrary-resolution global sensitivity kernels
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Fournier, A.; Dahlen, F.
2007-12-01
Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels, and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry a 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivities for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.
Witholder, R.E.
1980-04-01
The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.
NASA Astrophysics Data System (ADS)
Massmann, C.; Holzmann, H.
2012-12-01
The effect of 11 parameters on the discharge of a conceptual rainfall-runoff model was analyzed for a small Austrian catchment. The sensitivities were computed using three methods: Sobol's indices, the mutual entropy and regional sensitivity analysis (RSA). The calculations were carried out for different temporal scales of evaluation ranging from daily to a multiannual period. A comparison of the methods shows that the mutual entropy and the RSA methods give more robust results than Sobol's method, which shows a higher variability in the sensitivities when they are calculated using different data sets. While all sensitivity methods are suitable for identifying the most sensitive parameters of a model, there are increasing differences in the results when the parameters become less important and also when shorter temporal scales are considered. A correlation analysis further indicated that the periods in which the parameter sensitivity rankings did not agree between the different methods are characterized by a higher impact of the parameter interactions on the modeled discharge. An analysis of the parameter sensitivity across the scales showed that the number of important parameters decreases when longer evaluation periods are considered. For instance, it was observed that all parameters were important during at least one day at a daily scale, while at a yearly scale only the parameters characterizing the soil storage and the recession constants for interflow and percolation had high sensitivities. With respect to the impact of the interactions between parameters on the model results, it was observed that the largest effect is related to the parameters describing the size of the soil storage, the interflow and the percolation flow recession constants. Further, it was observed that there is a positive correlation between the importance of the interactions and the measured discharge. While the study focuses on quantitative sensitivity measures, it is also highlighted
NASA Astrophysics Data System (ADS)
Vilain, Guillaume; Müller, Christoph; Schaphoff, Sibyll; Lotze-Campen, Hermann; Feulner, Georg
2013-04-01
Nitrogen (N) cycling affects carbon uptake by the terrestrial biosphere and imposes controls on the carbon cycle response to variation in temperature and precipitation. In the absence of carbon-nitrogen interactions, surface warming significantly reduces carbon sequestration in both vegetation and soil by increasing respiration and decomposition (a positive feedback). If plant carbon uptake, however, is assumed to be nitrogen limited, an increase in decomposition leads to an increase in nitrogen availability stimulating plant growth. The resulting increase in carbon uptake by vegetation can exceed carbon loss from the soil, leading to enhanced carbon sequestration (a negative feedback). Cultivation of biofuel crops is expanding because of its potential for climate mitigation, whereas the environmental impacts of bioenergy production still remain unknown. While carbon payback times are being increasingly investigated, non-CO2 greenhouse gas emissions of bioenergy production have received little attention so far. We introduced a process-based nitrogen cycle to the LPJmL model at the global scale (each grid cell being 0.5° latitude by 0.5° longitude in size). The model captures mechanisms essential for N cycling and their feedbacks on C cycling: the uptake, allocation and turnover of N in plants, N limitation of plant productivity, and soil N transformation including mineralization, N2 fixation, nitrification and denitrification, NH3 volatilization, N leaching and N2O emissions. Our model captures many essential characteristics of C-N interactions and is capable of broadly recreating spatial and temporal variations in N and C dynamics. Here we evaluate LPJmL by comparing the predicted variables with data from sites with sufficient observations to describe ecosystem nitrogen and carbon fluxes and contents and their responses to climate, as well as with estimates of N-dynamics at the global scale. The simulations presented here use no site-specific parameterizations in
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters: enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
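Of the three approaches compared, direct parameter perturbation is the easiest to sketch generically. A central-difference version, unrelated to RESRAD's actual implementation and using purely illustrative names, might look like:

```python
import numpy as np

def perturbation_jacobian(f, x0, rel_step=1e-6):
    """Central-difference sensitivity matrix J[i, j] = df_i / dx_j:
    perturb each input parameter in turn and difference the model outputs.
    Tedious at scale (two model runs per parameter), as the report notes."""
    x0 = np.asarray(x0, float)
    f0 = np.asarray(f(x0), float)
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):
        h = rel_step * max(abs(x0[j]), 1.0)   # step scaled to parameter magnitude
        xp, xm = x0.copy(), x0.copy()
        xp[j] += h
        xm[j] -= h
        J[:, j] = (np.asarray(f(xp)) - np.asarray(f(xm))) / (2.0 * h)
    return J

# Toy two-output model standing in for a dose-assessment code.
f = lambda x: np.array([x[0] ** 2 + x[1], 3.0 * x[0] * x[1]])
J = perturbation_jacobian(f, [2.0, 1.0])
# Analytic Jacobian at (2, 1) is [[4, 1], [3, 6]].
```

The two-runs-per-parameter cost is exactly why gradient-enhanced approaches such as GRESS, which differentiate the code itself, are attractive despite their caveats.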
NASA Astrophysics Data System (ADS)
Ishida, Hiroyuki; Kobayashi, Shota; Kanae, Shinjiro; Hasegawa, Tomoko; Fujimori, Shinichiro; Shin, Yonghee; Takahashi, Kiyoshi; Masui, Toshihiko; Tanaka, Akemi; Honda, Yasushi
2014-05-01
This study assessed the health burden attributable to childhood underweight through 2050, focusing on disability-adjusted life years (DALYs), by considering the latest scenarios for climate change studies (representative concentration pathways and shared socioeconomic pathways (SSPs)) and conducting sensitivity analysis. A regression model for estimating DALYs attributable to childhood underweight (DAtU) was developed using the relationship between DAtU and childhood stunting. We combined a global computable general equilibrium model, a crop model, and two regression models to assess the future health burden. We found that (i) world total DAtU decreases from 2005 by 28–63% in 2050 depending on the socioeconomic scenario. Per capita DAtU also decreases in all regions under either scenario in 2050, but the decreases vary significantly by region and scenario. (ii) The impact of climate change is relatively small in the framework of this study; socioeconomic conditions, on the other hand, have a great impact on the future health burden. (iii) Parameter uncertainty of the regression models is the second largest factor in the uncertainty of the results, following the changes in socioeconomic conditions, and uncertainty derived from the difference in global circulation models is the smallest in the framework of this study.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F., Jr.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
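The λ-based sensitivities and elasticities discussed above follow from the standard eigenvector formulas, s_ij = v_i w_j / ⟨v, w⟩ and e_ij = (a_ij / λ) s_ij. A generic sketch (the projection matrix below is a two-stage toy example, not the killer whale data):

```python
import numpy as np

def lambda_sensitivity(A):
    """Dominant eigenvalue lambda of a projection matrix A, with the
    classical sensitivity matrix S[i, j] = v_i * w_j / <v, w> (w = right
    eigenvector, stable stage structure; v = left eigenvector, reproductive
    values) and the elasticities E = (A / lambda) * S."""
    eigvals, W = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam = eigvals[k].real
    w = np.abs(W[:, k].real)
    eigvals_l, V = np.linalg.eig(A.T)          # left eigenvectors of A
    v = np.abs(V[:, np.argmax(eigvals_l.real)].real)
    S = np.outer(v, w) / (v @ w)               # d lambda / d a_ij
    E = A * S / lam                            # proportional (elasticity) scale
    return lam, S, E

# Toy stage matrix: juveniles mature with probability 0.5,
# adults produce 1.5 offspring and survive with probability 0.8.
A = np.array([[0.0, 1.5],
              [0.5, 0.8]])
lam, S, E = lambda_sensitivity(A)
```

A useful check on the scale issue the abstract raises: the elasticities always sum to exactly 1, whereas the raw sensitivities have no such normalization, which is one root of the inconsistencies in comparing demographic rates.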
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2002-01-01
The Laser Interferometer Space Antenna (LISA) for the detection of gravitational waves is a very long baseline interferometer which will measure the changes in the distance of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
Loizou, George D; McNally, Kevin; Jones, Kate; Cocker, John
2015-01-01
Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688
Kim, Nam-Soo; Im, Min-Ji; Nkongolo, Kabwe
2016-08-01
Red maple (Acer rubrum), a common deciduous tree species in Northern Ontario, has shown resistance to soil metal contamination. Previous reports have indicated that this plant does not accumulate metals in its tissue. However, low levels of nickel and copper, corresponding to the bioavailable levels in contaminated soils in Northern Ontario, cause severe physiological damage. No differentiation between metal-contaminated and uncontaminated populations has been reported based on genetic analyses. The main objective of this study was to assess whether DNA methylation is involved in A. rubrum adaptation to soil metal contamination. Global cytosine and methylation-sensitive amplified polymorphism (MSAP) analyses were carried out in A. rubrum populations from metal-contaminated and uncontaminated sites. The global modified cytosine ratios in genomic DNA revealed a significant decrease in cytosine methylation in genotypes from a metal-contaminated site compared to uncontaminated populations. Other genotypes from a different metal-contaminated site within the same region appear to be recalcitrant to metal-induced DNA alterations even after ≥30 years of tree-life exposure to nickel and copper. MSAP analysis showed a high level of polymorphism in both uncontaminated (77%) and metal-contaminated (72%) populations. Overall, 205 CCGG loci were identified, of which 127 were methylated in either the outer or inner cytosine. No differentiation among populations was established based on several genetic parameters tested. The variations for nonmethylated and methylated loci were compared by analysis of molecular variance (AMOVA). For methylated loci, molecular variance among and within populations was 1.5% and 13.2%, respectively. These values were low (0.6% among populations and 5.8% within populations) for unmethylated loci. Metal contamination is thus seen to affect methylation of cytosine residues in CCGG motifs in the A. rubrum populations that were analyzed. PMID:27547351
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes maximum likelihood estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the likelihood ratio test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
NASA Technical Reports Server (NTRS)
Fu, L. L.; Chao, Y.
1997-01-01
Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analyses from the National Centers for Environmental Prediction (NCEP), and observations made by the ERS-1 radar scatterometer.
NASA Astrophysics Data System (ADS)
Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John
2013-01-01
A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide MCPP (Mecoprop) which is mobile and relatively persistent, glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide, and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: MCPP concentration at the drinking water well can be up to 7% of surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark where pesticides are found at higher concentrations at shallow depths and close to streams.
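The Morris method used here screens parameters with one-at-a-time elementary effects. A simplified radial version on the unit hypercube, with an illustrative toy response standing in for the reactive transport model, could be:

```python
import numpy as np

def morris_elementary_effects(model, n_params, n_trajectories=50,
                              delta=0.25, seed=0):
    """One-at-a-time Morris screening on the unit hypercube: from each
    random base point, perturb one factor at a time by delta and record
    the elementary effect EE = (f(x + delta*e_i) - f(x)) / delta.
    Returns mu* (mean |EE|, overall importance) and sigma (std of EE,
    a signal of nonlinearity/interactions) for each parameter."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, n_params)  # keep x + delta in [0, 1]
        y0 = model(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta
            ee[t, i] = (model(x_step) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy response: strongly driven by x0 (linear), weakly by x1 (nonlinear),
# and entirely insensitive to x2.
def model(x):
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

mu_star, sigma = morris_elementary_effects(model, 3)
```

Ranking parameters by μ* reproduces the screening role Morris plays in the study above: a cheap way to separate dominant factors (here x0) from negligible ones (x2) before any expensive variance-based analysis.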
NASA Astrophysics Data System (ADS)
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global ones, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher-order interactions among parameters, which can lead to an incomplete analysis of model behavior. To address this concern, we performed a global sensitivity analysis (GSA) on a commonly applied equation for soil loss, the Revised Universal Soil Loss Equation (RUSLE). The USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot- and watershed-scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot- and watershed-scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for higher-order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses in which the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch
Brookes, Victoria J.; Jordan, David; Davis, Stephen; Ward, Michael P.; Heller, Jane
2015-01-01
Introduction: Strains of Shiga-toxin producing Escherichia coli O157 (STEC O157) are important foodborne pathogens in humans, and outbreaks of illness have been associated with consumption of undercooked beef. Here, we determine the most effective intervention strategies to reduce the prevalence of STEC O157-contaminated beef carcasses using a modelling approach. Method: A computational model simulated events and processes in the beef harvest chain. Information from empirical studies was used to parameterise the model. Variance-based global sensitivity analysis (GSA) using the Saltelli method identified variables with the greatest influence on the prevalence of STEC O157-contaminated carcasses. Following a baseline scenario (no interventions), a series of simulations systematically introduced and tested interventions based on influential variables identified by repeated Saltelli GSA, to determine the most effective intervention strategy. Results: Transfer of STEC O157 from hide or gastro-intestinal tract to carcass (improved abattoir hygiene) had the greatest influence on the prevalence of contaminated carcasses. Due to interactions between inputs (identified by Saltelli GSA), combinations of interventions based on improved abattoir hygiene achieved a greater reduction in maximum prevalence than would be expected from an additive effect of single interventions. The most effective combination was improved abattoir hygiene with vaccination, which achieved a greater than ten-fold decrease in maximum prevalence compared to the baseline scenario. Conclusion: Study results suggest that effective interventions to reduce the prevalence of STEC O157-contaminated carcasses should initially be based on improved abattoir hygiene. However, the effect of improved abattoir hygiene on the distribution of STEC O157 concentration on carcasses is an important information gap; further empirical research is required to determine whether reduced prevalence of contaminated carcasses is
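The Saltelli variance-based GSA named above can be sketched with pick-and-freeze sample matrices A, B and AB_i, using the Saltelli first-order and Jansen total-effect estimators; the additive toy model below stands in for the beef harvest chain simulation:

```python
import numpy as np

def sobol_indices(model, n_params, n=8192, seed=0):
    """Saltelli-style estimates of first-order (S_i) and total-effect (ST_i)
    Sobol indices on the unit hypercube. AB_i takes column i from B and the
    rest from A; S_i uses the Saltelli (2010) estimator and ST_i the Jansen
    estimator. Cost: n * (n_params + 2) model evaluations."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_params))
    B = rng.uniform(size=(n, n_params))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(n_params), np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]
        yAB = model(AB)
        S[i] = np.mean(yB * (yAB - yA)) / var            # first-order effect
        ST[i] = 0.5 * np.mean((yA - yAB) ** 2) / var     # total effect
    return S, ST

# Additive toy model: x0 dominates (analytic S0 = 16/17), x2 is inert.
model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]
S, ST = sobol_indices(model, 3)
```

A gap between ST_i and S_i flags interaction effects, which is exactly the signal the study used to motivate testing *combinations* of interventions rather than single ones.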
Sensitivity of alpine watersheds to global change
NASA Astrophysics Data System (ADS)
Zierl, B.; Bugmann, H.
2003-04-01
Mountains provide society with a wide range of goods and services, so-called mountain ecosystem services. Besides many others, these services include the most precious element for life on earth: fresh water. Global change imposes significant environmental pressure on mountain watersheds. Climate change is predicted to modify water availability as well as shift its seasonality. In fact, the continued capacity of mountain regions to provide fresh water to society is threatened by the impact of environmental and social changes. We use RHESSys (Regional HydroEcological Simulation System) to analyse the impact of climate as well as land use change (e.g. afforestation or deforestation) on hydrological processes in mountain catchments using sophisticated climate and land use scenarios. RHESSys combines distributed flow modelling based on TOPMODEL with an ecophysiological canopy model based on BIOME-BGC and a climate interpolation scheme based on MTCLIM. It is a spatially distributed, daily time step model designed to solve the coupled cycles of water, carbon, and nitrogen in mountain catchments. The model is applied to various mountain catchments in the alpine area. Dynamic hydrological and ecological properties such as river discharge, seasonality of discharge, peak flows, snow cover processes, soil moisture, and the feedback of a changing biosphere on hydrology are simulated under current as well as under changed environmental conditions. Results of these studies will be presented and discussed. This project is part of an overarching EU project called ATEAM (acronym for Advanced Terrestrial Ecosystem Analysis and Modelling) assessing the vulnerability of European ecosystem services.
Sensitivity analysis in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1984-01-01
Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.
Mathew, Shibin; Bartels, John; Banerjee, Ipsita; Vodovotz, Yoram
2014-01-01
The precise inflammatory role of the cytokine interleukin (IL)-6 and its utility as a biomarker or therapeutic target have been the source of much debate, presumably due to the complex pro- and anti-inflammatory effects of this cytokine. We previously developed a nonlinear ordinary differential equation (ODE) model to explain the dynamics of endotoxin (lipopolysaccharide; LPS)-induced acute inflammation and associated whole-animal damage/dysfunction (a proxy for the health of the organism), along with the inflammatory mediators tumor necrosis factor (TNF)-α, IL-6, IL-10, and nitric oxide (NO). The model was partially calibrated using data from endotoxemic C57Bl/6 mice. Herein, we investigated the sensitivity of the area under the damage curve (AUC_D) to the 51 rate parameters of the ODE model for different levels of simulated LPS challenges using a global sensitivity approach called Random Sampling High Dimensional Model Representation (RS-HDMR). We explored sufficient parametric Monte Carlo samples to generate the variance-based Sobol' global sensitivity indices, and found that inflammatory damage was highly sensitive to the parameters affecting the activity of IL-6 during the different stages of acute inflammation. The AUC_IL6 showed a bimodal distribution, with the lower peak representing a healthy response and the higher peak representing sustained inflammation. Damage was minimal at low AUC_IL6, giving rise to a healthy response. In contrast, intermediate levels of AUC_IL6 resulted in high damage, and this was due to the insufficiency of damage recovery driven by anti-inflammatory responses and the activation of positive feedback sustained by IL-6. At high AUC_IL6, damage recovery was interestingly restored in some population of simulated animals due to the NO-mediated anti-inflammatory responses. These observations suggest that the host's health status during acute inflammation depends in a nonlinear fashion on the magnitude of the inflammatory stimulus, on the
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence study with much less computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
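The core idea of forward sensitivity analysis, augmenting the state equations dy/dt = f(y, p) with sensitivity equations ds/dt = (∂f/∂y)s + (∂f/∂p), can be illustrated on a scalar decay problem (a toy stand-in for the benchmark problems mentioned, not the paper's actual test cases):

```python
import numpy as np
from scipy.integrate import solve_ivp

# State equation: dy/dt = -k*y.  The sensitivity s = dy/dk then obeys
# ds/dt = (df/dy)*s + df/dk = -k*s - y, integrated alongside the state.
def rhs(t, z, k):
    y, s = z
    return [-k * y, -k * s - y]

k, y0 = 0.5, 2.0
sol = solve_ivp(rhs, (0.0, 4.0), [y0, 0.0],  # s(0) = 0: initial state is k-independent
                args=(k,), rtol=1e-9, atol=1e-11)
y_end, s_end = sol.y[0, -1], sol.y[1, -1]

# Analytic check: y = y0*exp(-k*t), so dy/dk = -t * y0 * exp(-k*t).
s_exact = -4.0 * y0 * np.exp(-k * 4.0)
```

One forward solve of the augmented system yields the parameter sensitivity at every output time; treating the time step itself as one more parameter, as the paper proposes, makes the accumulated discretization error directly comparable to these physical sensitivities.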
Sensitivity of global terrestrial ecosystems to climate variability.
Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J
2016-03-10
The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being. PMID:26886790
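The autoregressive step described above can be sketched as an ordinary least-squares fit of a vegetation anomaly on its own lag plus climate drivers: the lag coefficient captures the "memory effect" and the climate coefficients the sensitivity to each driver. The data, coefficient values, and names below are synthetic assumptions for illustration only; the published vegetation sensitivity index adds variance weighting across drivers that is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # 20 years of synthetic monthly anomalies
temp = rng.normal(size=n)     # temperature anomaly
water = rng.normal(size=n)    # water-availability anomaly
evi = np.zeros(n)             # vegetation (EVI) anomaly
for t in range(1, n):
    # synthetic truth: memory 0.5, temperature sensitivity 0.8, water 0.1
    evi[t] = 0.5 * evi[t - 1] + 0.8 * temp[t] + 0.1 * water[t] + 0.05 * rng.normal()

# Autoregressive fit: evi[t] ~ evi[t-1] + climate drivers + intercept
X = np.column_stack([evi[:-1], temp[1:], water[1:], np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(X, evi[1:], rcond=None)
memory, sens_temp, sens_water = coef[0], coef[1], coef[2]
```

A large `memory` coefficient flags regions with reduced response rates to external forcing, while the relative sizes of the driver coefficients indicate which climate variable the vegetation responds to most.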
Antony, Hiasindh Ashmi; Pathak, Vrushali; Parija, Subhash Chandra; Ghosh, Kanjaksha; Bhattacherjee, Amrita
2016-07-01
Increasing drug resistance in Plasmodium falciparum is an important global health burden because it reverses the malarial control achieved so far. Hence, understanding the molecular mechanisms of drug resistance is the epicenter of the development agenda for novel diagnostic and therapeutic (drugs/vaccines) targets for malaria. In this study, we report global comparative transcriptome profiling (RNA-Seq) to characterize the difference in the transcriptome between 48-h intraerythrocytic stage of chloroquine-sensitive and chloroquine-resistant P. falciparum (3D7 and Dd2) strains. The two P. falciparum 3D7 and Dd2 strains have distant geographical origin, the Netherlands and Indochina, respectively. The strains were cultured by an in vitro method and harvested at the 48-h intraerythrocytic stage having 5% parasitemia. The whole transcriptome sequencing was performed using Illumina HiSeq 2500 platform with paired-end reads. The reads were aligned with the reference P. falciparum genome. The alignment percentages for 3D7, Dd2, and Dd2 w/CQ strains were 85.40%, 89.13%, and 84%, respectively. Nearly 40% of the transcripts had known gene function, whereas the remaining genes (about 60%) had unknown function. The genes involved in immune evasion showed a significant difference between the strains. The differential gene expression between the sensitive and resistant strains was measured using the cuffdiff program with the p-value cutoff ≤0.05. Collectively, this study identified differentially expressed genes between 3D7 and Dd2 strains, where we found 89 genes to be upregulated and 227 to be downregulated. On the contrary, for 3D7 and Dd2 w/CQ strains, 45 genes were upregulated and 409 were downregulated. These differentially regulated genes code, by and large, for surface antigens involved in invasion, pathogenesis, and host-parasite interactions, among others. The exhibition of transcriptional differences between these strains of P. falciparum contributes to our
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
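The global sensitivity equations at the heart of the method resolve exactly the coupling described above: each discipline supplies only its local partial derivatives, and a small linear solve yields the total derivatives of the coupled system. A minimal two-discipline linear sketch (illustrative values, not the paper's platform model) looks like:

```python
import numpy as np

# Two coupled 'disciplines' (say, structures and controls), linear for brevity:
#   y1 = a1*x + b1*y2      (structural response depends on control response)
#   y2 = a2*x + b2*y1      (control response depends on structural response)
a1, b1, a2, b2 = 2.0, 0.5, -1.0, 0.25

# Global sensitivity equations: (I - J) dy/dx = df/dx, where J holds the
# cross-discipline partials and df/dx the local partials w.r.t. the design
# variable x.  No discipline needs the others' internals, only partials.
J = np.array([[0.0, b1],
              [b2, 0.0]])
rhs = np.array([a1, a2])
dy_dx = np.linalg.solve(np.eye(2) - J, rhs)

# Closed-form check for this linear case:
#   dy1/dx = (a1 + b1*a2) / (1 - b1*b2),  dy2/dx = (a2 + b2*a1) / (1 - b1*b2)
```

The same (I - J) system is what makes the approach scale: adding design variables only adds right-hand sides, not new factorizations.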
Coser, Kathryn R.; Chesnes, Jessica; Hur, Jingyung; Ray, Sandip; Isselbacher, Kurt J.; Shioda, Toshi
2003-01-01
To obtain comprehensive information on 17β-estradiol (E2) sensitivity of genes that are inducible or suppressible by this hormone, we designed a method that determines ligand sensitivities of large numbers of genes by using DNA microarray and a set of simple Perl computer scripts implementing the standard metric statistics. We used it to characterize effects of low (0–100 pM) concentrations of E2 on the transcriptome profile of MCF7/BUS human breast cancer cells, whose E2 dose-dependent growth curve saturated with 100 pM E2. Evaluation of changes in mRNA expression for all genes covered by the DNA microarray indicated that, at a very low concentration (10 pM), E2 suppressed ≈3–5 times larger numbers of genes than it induced, whereas at higher concentrations (30–100 pM) it induced ≈1.5–2 times more genes than it suppressed. Using clearly defined statistical criteria, E2-inducible genes were categorized into several classes based on their E2 sensitivities. This approach of hormone sensitivity analysis revealed that expression of two previously reported E2-inducible autocrine growth factors, transforming growth factor α and stromal cell-derived factor 1, was not affected by 100 pM and lower concentrations of E2 but strongly enhanced by 10 nM E2, which was far higher than the concentration that saturated the E2 dose-dependent growth curve of MCF7/BUS cells. These observations suggested that biological actions of E2 are derived from expression of multiple genes whose E2 sensitivities differ significantly and, hence, depend on the E2 concentration, especially when it is lower than the saturating level, emphasizing the importance of characterizing the ligand dose-dependent aspects of E2 actions. PMID:14610279
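The paper's Perl scripts implement "standard metric statistics" applied gene by gene. A minimal stand-in for that per-gene screening step, here in Python with a Welch t statistic and synthetic expression data, might look like the following; the replicate counts, effect size, and threshold are illustrative assumptions, not the authors' criteria.

```python
import numpy as np

def gene_t_stats(control, treated):
    """Welch t statistic per gene (rows = genes, cols = replicate arrays):
    a minimal stand-in for per-gene 'metric statistics' screening."""
    m1, m2 = control.mean(axis=1), treated.mean(axis=1)
    v1 = control.var(axis=1, ddof=1) / control.shape[1]
    v2 = treated.var(axis=1, ddof=1) / treated.shape[1]
    return (m2 - m1) / np.sqrt(v1 + v2)

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=(100, 4))   # 100 genes, 4 replicates
treated = rng.normal(0.0, 1.0, size=(100, 4))
treated[0] += 10.0                              # one strongly E2-induced gene
t = gene_t_stats(control, treated)
induced = np.flatnonzero(t > 4.0)               # genes passing the cutoff
```

Repeating the screen at each E2 concentration, and recording the lowest concentration at which a gene passes the cutoff, yields the ligand-sensitivity classes described in the abstract.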
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties, and it is inefficient for sensitivity analysis. In contrast, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code: equations for the propagation of uncertainty are constructed, and the sensitivities are solved for directly as variables in the simulation. This paper presents the forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the sensitivity of the time and space steps relative to the physical parameters of interest, the simulation is allowed
Elahi, Elahe; Kumm, Jochen; Ronaghi, Mostafa
2004-01-31
The introduction of molecular markers in genetic analysis has revolutionized medicine. These molecular markers are genetic variations associated with a predisposition to common diseases and with individual variations in drug responses. Identification and genotyping of a vast number of genetic polymorphisms in large populations are increasingly important for disease gene identification, pharmacogenetics, and population-based studies. Among the variations being analyzed, single nucleotide polymorphisms seem to be the most useful in large-scale genetic analysis. This review discusses approaches for genetic analysis, the use of different markers, and emerging technologies for large-scale genetic analysis where millions of genotypes need to be determined. PMID:14761299
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
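Zajac's linear activation dynamics, da/dt = (u - a)/τ, is simple enough to illustrate two ideas from the abstract: normalized parameter sensitivities of an ODE solution, and treating the initial condition as just another parameter. The solver, parameter values, and normalization below are illustrative assumptions, not the authors' code.

```python
def zajac_activation(u, tau, a0, t_end=0.05, dt=1e-3):
    """Zajac-style linear activation dynamics da/dt = (u - a)/tau,
    integrated with forward Euler (illustrative, not the paper's solver)."""
    a = a0
    for _ in range(int(round(t_end / dt))):
        a += dt * (u - a) / tau
    return a

def relative_sensitivity(f, p, dp=1e-6):
    """Normalized sensitivity (p/f) * df/dp via central differences."""
    return p * (f(p + dp) - f(p - dp)) / (2 * dp * f(p))

# Sensitivity of the final activation to the time constant tau, and,
# treating the initial condition a0 as a parameter, to a0 as well.
s_tau = relative_sensitivity(lambda tau: zajac_activation(1.0, tau, 0.05), 0.04)
s_a0 = relative_sensitivity(lambda a0: zajac_activation(1.0, 0.04, a0), 0.05)
```

On this timescale (about one time constant) the response is far more sensitive to τ than to the initial activation, the kind of ranking the paper uses to decide which parameters matter for estimation and which are superfluous.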
Involute composite design evaluation using global design sensitivity derivatives
NASA Technical Reports Server (NTRS)
Hart, J. K.; Stanton, E. L.
1989-01-01
An optimization capability for involute structures has been developed. Its key feature is the use of global material geometry variables which are so chosen that all combinations of design variables within a set of lower and upper bounds correspond to manufacturable designs. A further advantage of global variables is that their number does not increase with increasing mesh density. The accuracy of the sensitivity derivatives has been verified both through finite difference tests and through the successful use of the derivatives by an optimizer. The state of the art in composite design today is still marked by point design algorithms linked together using ad hoc methods not directly related to a manufacturing procedure. The global design sensitivity approach presented here for involutes can be applied to filament wound shells and other composite constructions using material form features peculiar to each construction. The present involute optimization technology is being applied to the Space Shuttle SRM nozzle boot ring redesigns by PDA Engineering.
Connecting Local and Global Sensitivities in a Mathematical Model for Wound Healing.
Krishna, Nitin A; Pennington, Hannah M; Coppola, Canaan D; Eisenberg, Marisa C; Schugart, Richard C
2015-12-01
The process of wound healing is governed by complex interactions between proteins and the extracellular matrix, involving a range of signaling pathways. This study aimed to formulate, quantify, and analyze a mathematical model describing interactions among matrix metalloproteinases (MMP-1), their inhibitors (TIMP-1), and extracellular matrix in the healing of a diabetic foot ulcer. De-identified patient data for modeling were taken from Muller et al. (Diabet Med 25(4):419-426, 2008), a study that collected average physiological data for two patient subgroups: "good healers" and "poor healers," classified by rate of ulcer healing. Model parameters for the two patient subgroups were estimated using least squares. The model and parameter values were analyzed by conducting a steady-state analysis and both global and local sensitivity analyses. The global sensitivity analysis was performed using Latin hypercube sampling and partial rank correlation analysis, while local analysis was conducted through a classical sensitivity analysis followed by an SVD-QR subset selection. We developed a "local-to-global" analysis to compare the results of the sensitivity analyses. Our results show that the sensitivities of certain parameters are highly dependent on the size of the parameter space, suggesting that identifying physiological bounds may be critical in defining the sensitivities. PMID:26597096
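The global step named above (Latin hypercube sampling followed by partial rank correlation) can be sketched from scratch in a few lines. The toy response function and parameter names below are assumptions for illustration; the paper applies the same machinery to its MMP/TIMP/ECM model.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n samples in k dimensions: stratify [0,1) into n bins per dimension
    and shuffle the bins independently in each column."""
    return (rng.random((n, k)) + np.argsort(rng.random((n, k)), axis=0)) / n

def _ranks(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation of each column of X with y: rank-transform,
    regress out the other columns, then correlate the residuals."""
    Xr, yr = _ranks(X), _ranks(y)
    out = []
    for i in range(Xr.shape[1]):
        others = np.column_stack([np.delete(Xr, i, axis=1), np.ones(len(yr))])
        rx = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
    return np.array(out)

rng = np.random.default_rng(2)
U = latin_hypercube(500, 3, rng)
# toy 'healing' response: strongly driven by p0, weakly by p1, not by p2
y = 5.0 * U[:, 0] + 0.5 * U[:, 1] + 0.1 * rng.normal(size=500)
r = prcc(U, y)
```

Shrinking or widening the hypercube bounds and re-running `prcc` reproduces the paper's observation that rankings can depend strongly on the assumed parameter ranges.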
An analysis of sensitivity tests
Neyer, B.T.
1992-03-06
A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
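The likelihood-ratio region idea can be sketched on a grid: for binary go/no-go data under a normal threshold model, the 95% confidence region is every (μ, σ) whose log-likelihood is within the χ² cutoff of the maximum. This is a brute-force illustration of the principle with simulated data, not Neyer's algorithm.

```python
import math
import numpy as np

def loglik(mu, sigma, levels, outcomes):
    """Log-likelihood of go/no-go outcomes under a normal threshold model:
    P(respond at stimulus x) = Phi((x - mu) / sigma)."""
    z = (levels - mu) / sigma
    p = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))

rng = np.random.default_rng(3)
levels = np.linspace(-2, 2, 40)                                # stimulus levels
outcomes = (levels + rng.normal(0, 1.0, 40) > 0).astype(int)   # true mu=0, sigma=1

mus = np.linspace(-1.5, 1.5, 61)
sigmas = np.linspace(0.3, 3.0, 55)
L = np.array([[loglik(m, s, levels, outcomes) for s in sigmas] for m in mus])
# Likelihood-ratio region: 2*(max - L) <= 5.99 (chi-square, 2 dof, 95%)
region = 2 * (L.max() - L) <= 5.99
mu_hat = mus[np.unravel_index(L.argmax(), L.shape)[0]]
```

Because the region is read directly off the likelihood surface rather than from an asymptotic normal approximation, it keeps its stated coverage even for small or poorly balanced test series.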
21st century runoff sensitivities of major global river basins
NASA Astrophysics Data System (ADS)
Tang, Qiuhong; Lettenmaier, Dennis P.
2012-03-01
River runoff is a key index of renewable water resources which affect almost all human and natural systems. Any substantial change in runoff will therefore have serious social, environmental, and ecological consequences. We estimate the runoff response to global mean temperature change implied by the climate change experiments generated for the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). In contrast to previous studies, we estimate the runoff sensitivity using global mean temperature change as an index of anthropogenic climate changes in temperature and precipitation, with the rationale that this removes the dependence on emissions scenarios. Our results show that the runoff sensitivity implied by the IPCC experiments is relatively stable across emissions scenarios and global mean temperature increments, but varies substantially across models with the exception of the high-latitudes and currently arid or semi-arid areas. The runoff sensitivities are slightly higher at 0.5°C warming than for larger amounts of warming. The estimated ratio of runoff change to (local) precipitation change (runoff elasticity) ranges from about one to three, and the runoff temperature sensitivity (change in runoff per degree C of local temperature increase) ranges from decreases of about 2 to 6% over most basins in North America and the middle and high latitudes of Eurasia.
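The two sensitivity measures used above are simple ratios, and a worked example makes the units concrete. The figures below are invented for illustration; only the definitions come from the abstract.

```python
def runoff_elasticity(q0, q1, p0, p1):
    """Ratio of fractional runoff change to fractional precipitation change."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

def temperature_sensitivity(q0, q1, dT):
    """Percent change in runoff per degree C of local temperature increase."""
    return 100.0 * (q1 - q0) / q0 / dT

# e.g. a 5% precipitation increase (500 -> 525 mm) producing a 10% runoff
# increase (100 -> 110 mm) gives an elasticity of 2, within the reported
# one-to-three range
e = runoff_elasticity(100.0, 110.0, 500.0, 525.0)

# and a 4% runoff decline over 2 degrees C of warming gives -2 %/degree C,
# within the reported 2-6% decrease per degree
sens_T = temperature_sensitivity(100.0, 96.0, 2.0)
```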
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
Diagnostic Analysis of Middle Atmosphere Climate Sensitivity
NASA Astrophysics Data System (ADS)
Zhu, X.; Cai, M.; Swartz, W. H.; Coy, L.; Yee, J.; Talaat, E. R.
2013-12-01
Both the middle atmosphere climate sensitivity associated with the cooling trend and its uncertainty due to a complex system of drivers increase with altitude. Furthermore, the combined effect of middle atmosphere cooling due to long-lived greenhouse gases and ozone is also associated with natural climate variations due to solar activity. To understand and predict climate change from a global perspective, we use the recently developed climate feedback-response analysis method (CFRAM) to identify and isolate the signals from the external forcing and from different feedback processes in the middle atmosphere climate system. By use of the JHU/APL middle atmosphere radiation algorithm, the CFRAM is applied to the model output fields of the high-altitude GEOS-5 climate model in the middle atmosphere to delineate the individual contributions of radiative forcing to middle atmosphere climate sensitivity.
Sensitivity and Uncertainty Analysis Shell
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyses the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
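The loose-coupling pattern described above (sample generation, an externally supplied process model, then post-processing of the sampled outputs) can be sketched in a few lines. The distributions, input names, and stand-in model below are assumptions for illustration; SUNS itself exchanges the sample through files rather than in memory.

```python
import numpy as np

def generate_sample(dists, n, rng):
    """Step 1: draw a Monte Carlo sample of the uncertain inputs.
    `dists` maps an input name to a (mean, std) pair (normal, for brevity)."""
    return {name: rng.normal(m, s, n) for name, (m, s) in dists.items()}

def process_model(x):
    """Stand-in for the user's loosely coupled simulation code."""
    return x["load"] / x["area"]

rng = np.random.default_rng(4)
sample = generate_sample({"load": (1000.0, 50.0), "area": (10.0, 0.1)}, 2000, rng)
stress = process_model(sample)

# Step 3: post-processing of the sampled output
mean, p95 = stress.mean(), np.percentile(stress, 95)
```

The shell never needs to know what the model computes; it only supplies input columns and summarizes output columns, which is what makes the coupling "loose".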
Sensitivity of regional climate to global temperature and forcing
NASA Astrophysics Data System (ADS)
Tebaldi, Claudia; O'Neill, Brian; Lamarque, Jean-François
2015-07-01
The sensitivity of regional climate to global average radiative forcing and temperature change is important for setting global climate policy targets and designing scenarios. Setting effective policy targets requires an understanding of the consequences of exceeding them, even by small amounts, and the effective design of sets of scenarios requires knowledge of how different emissions, concentrations, or forcings need to be in order to produce substantial differences in climate outcomes. Using an extensive database of climate model simulations, we quantify how differences in global average quantities relate to differences in both the spatial extent and magnitude of climate outcomes at regional (250-1250 km) scales. We show that differences of about 0.3 °C in global average temperature are required to generate statistically significant changes in regional annual average temperature over more than half of the Earth’s land surface. A global difference of 0.8 °C is necessary to produce regional warming over half the land surface that is not only significant but reaches at least 1 °C. As much as 2.5 to 3 °C is required for a statistically significant change in regional annual average precipitation that is equally pervasive. Global average temperature change provides a better metric than radiative forcing for indicating differences in regional climate outcomes due to the path dependency of the effects of radiative forcing. For example, a difference in radiative forcing of 0.5 W m-2 can produce statistically significant differences in regional temperature over an area that ranges between 30% and 85% of the land surface, depending on the forcing pathway.
Global thermohaline circulation. Part 1: Sensitivity to atmospheric moisture transport
Wang, X.; Stone, P.H.; Marotzke, J.
1999-01-01
A global ocean general circulation model of idealized geometry, combined with an atmospheric model based on observed transports of heat, momentum, and moisture, is used to explore the sensitivity of the global conveyor belt circulation to the surface freshwater fluxes, in particular the effects of meridional atmospheric moisture transports. The numerical results indicate that the equilibrium strength of the North Atlantic Deep Water (NADW) formation increases as the global freshwater transports increase. However, the global deep water formation--that is, the sum of the NADW and the Southern Ocean Deep Water formation rates--is relatively insensitive to changes of the freshwater flux. Perturbations to the meridional moisture transports of each hemisphere identify equatorially asymmetric effects of the freshwater fluxes. The results are consistent with box model results that the equilibrium NADW formation is primarily controlled by the magnitude of the Southern Hemisphere freshwater flux. However, the results show that the Northern Hemisphere freshwater flux has a strong impact on the transient behavior of the North Atlantic overturning. Increasing this flux leads to a collapse of the conveyor belt circulation, but the collapse is delayed if the Southern Hemisphere flux also increases. The perturbation experiments also illustrate that the rapidity of collapse is affected by random fluctuations in the wind stress field.
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
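One way to estimate the sensitivity of a location is to inject a fault there and measure, over random inputs, how often the output actually changes. The toy program below is an invented example (not from the paper) built so that its coarse boolean output masks almost every internal perturbation, exactly the kind of "insensitive" location the abstract warns about.

```python
import random

def program(x, faulty=False, coarse=True):
    """Toy program under test.  The fault is injected at one 'location';
    the coarse boolean output masks almost all internal state changes."""
    y = x * x
    if faulty:
        y += 1.0                      # injected fault (a small perturbation)
    return (1 if y > 10_000 else 0) if coarse else y

def sensitivity(coarse, n=10_000, seed=5):
    """Estimated fraction of random inputs on which the injected fault
    propagates to a changed output."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-200.0, 200.0)
        hits += program(x, coarse=coarse) != program(x, faulty=True, coarse=coarse)
    return hits / n

s_coarse = sensitivity(coarse=True)   # fault visible only near the threshold
s_fine = sensitivity(coarse=False)    # fault always visible
```

A near-zero `s_coarse` tells the tester that random black box testing could pass indefinitely without exercising a real fault at that location, which is the insight sensitivity analysis adds to test interpretation.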
Ellouze, M; Gauchi, J-P; Augustin, J-C
2011-06-01
The aim of this study was to apply a global sensitivity analysis (SA) method to model simplification and to evaluate (eO)®, a biological Time Temperature Integrator (TTI), as a quality and safety indicator for cold smoked salmon (CSS). Models were thus developed to predict the evolution of Listeria monocytogenes and the indigenous food flora in CSS and to predict the TTI endpoint. A global SA was then applied to the three models to identify the less important factors and simplify the models accordingly. Results showed that the subset of the most important factors of the three models was mainly composed of the durations and temperatures of two chill chain links outside the control of the manufacturers: the domestic refrigerator and the retail/cabinet links. The simplified versions of the three models were then run with 10^4 time-temperature profiles representing the variability associated with the microbial behavior, the TTI evolution, and the French chill chain characteristics. The results were used to assess the distributions of the microbial contaminations obtained at the TTI endpoint and at the end of the simulated profiles, and proved that, in the case of poor storage conditions, the TTI use could reduce the number of unacceptable foods by 50%. PMID:21511136
Stiff DAE integrator with sensitivity analysis capabilities
2007-11-26
IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
Point Source Location Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cox, J. Allen
1986-11-01
This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
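The standard centroid technique named above is a first-moment estimate over the detector grid. The sketch below uses a sampled Gaussian blur spot on square pixels, the simplest of the five blur spot/detector combinations treated in the paper; spot size, grid size, and position are illustrative assumptions.

```python
import numpy as np

def gaussian_spot(x0, y0, sigma, size=16):
    """Sampled Gaussian blur spot centred at (x0, y0) on a square pixel grid."""
    yy, xx = np.mgrid[0:size, 0:size]
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def centroid(img):
    """Standard (first-moment) centroid location estimate in pixel units."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xx * img).sum() / total, (yy * img).sum() / total

img = gaussian_spot(7.3, 8.6, sigma=1.5)
cx, cy = centroid(img)   # recovers the sub-pixel source position
```

Location accuracy studies of the kind reported here repeat this estimate while varying the blur spot model, detector shape, and noise, and compare the recovered (cx, cy) against the true source position.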
Sensitivity of Local Temperature CDFs to Global Climate Change
NASA Astrophysics Data System (ADS)
Stainforth, D.; Chapman, S. C.; Watkins, N. W.
2011-12-01
The sensitivity of climate to increasing atmospheric greenhouse gases at the global scale has been much studied [Knutti and Hegerl 2008, and references therein]. Scientific information to support climate change adaptation activities, however, is often sought at regional or local scales, the scales on which most adaptation decisions are made. Information on these scales is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005] and has questionable reliability [Stainforth et al., 2007]. Rather than using data derived or obtained from models, we focus on observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. Such an approach has many advantages: it avoids issues relating to model imperfections [Stainforth et al. 2007], it can be focused on decision-relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Taking a timeseries of local daily temperatures for various locations across the United Kingdom, we extract the changing cumulative distribution functions over time. We present a simple mathematical deconstruction of how two different observations from two different time periods can be assigned to the combination of natural variability and/or the consequences of climate change. Using this deconstruction we analyse the changing shape of the distributions and thus the sensitivity of different quantiles of the distribution. These sensitivities are found to be both regionally consistent and geographically varying across the United Kingdom, as one would expect given the different influences on local climate between, say, Western Scotland and South East England. We nevertheless find a common pattern of increased sensitivity in the 60th to 80th percentiles; above the mean but below the greatest extremes. The method has the potential to be applied to many other variables in addition to temperature and to
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of the robustness of LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Sensitivity of global model prediction to initial state uncertainty
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo
The sensitivity of global and North American forecasts to uncertainties in the initial conditions is studied. The Utah Global Model is initialized with reanalysis data sets obtained from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium- Range Weather Forecasts (ECMWF). The differences between these analyses provide an estimate of initial uncertainty. The influence of certain scales of the initial uncertainty is tested in experiments with initial data change from NCEP to ECMWF reanalysis in a selected spectral band. Experiments are also done to determine the benefits of targeting local regions for forecast errors over North America. In these tests, NCEP initial data are replaced by ECMWF data in the considered region. The accuracy of predictions with initial data from either reanalysis only differs over the mid-latitudes of the Southern Hemisphere, where ECMWF initialized forecasts have somewhat greater skill. Results from the spectral experiments indicate that most of this benefit is explained by initial differences of the longwave components (wavenumbers 0-15). Approximately 67% of the 120-h global forecast difference produced by changing initial data from ECMWF to NCEP reanalyses is due to initial changes only in wavenumbers 0-15, and more than 85% of this difference is produced by initial changes in wavenumbers 0-20. The results suggest that large-scale errors of the initial state may play a more prominent role than suggested in some singular vector analyses, and favor global observational coverage to resolve the long waves. Results from the regional targeting experiments indicate that for forecast errors over North America, a systematic benefit comes only when the ``targeted'' region includes most of the north Pacific, pointing again at large scale errors as being prominent, even for midrange predictions over a local area.
Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee
2014-06-16
Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying with season, and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model predicts the product temperature evolution from the thermostat setting and the ambient temperature, generating a realistic product time-temperature history. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. A refined analysis was then applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure. PMID:24786551
Sensitivity of global wildfire occurrences to various factors in the context of global change
NASA Astrophysics Data System (ADS)
Huang, Yaoxian; Wu, Shiliang; Kaplan, Jed O.
2015-11-01
The occurrence of wildfires is very sensitive to fire meteorology, vegetation type and coverage. We investigate the potential impacts of global change (including changes in climate, land use/land cover, and population density) on wildfire frequencies over the period of 2000-2050. We account for the impacts associated with the changes in fire meteorology (such as temperature, precipitation, and relative humidity), vegetation density, as well as lightning and anthropogenic ignitions. Fire frequencies under the 2050 conditions are projected to increase by approximately 27% globally relative to the 2000 levels. Significant increases in fire occurrence are calculated over the Amazon area, Australia and Central Russia, while Southeast Africa shows a large decreasing trend due to significant increases in land use and population. Changes in fire meteorology driven by 2000-2050 climate change are found to increase the global annual total fires by around 19%. Modest increases (∼4%) in fire frequency in tropical regions are calculated in response to climate-driven changes in lightning activities, relative to the present-day levels. Changes in land cover by 2050 driven by climate change and increasing CO2 fertilization are expected to increase global wildfire occurrences by 15% relative to the 2000 conditions, while the 2000-2050 anthropogenic land use changes show little effect on global wildfire frequency. The 2000-2050 changes in global population are projected to reduce the total wildfires by about 7%. In general, changes in future fire meteorology play the most important role in enhancing future global wildfires, followed by land cover, lightning activities and land use, while changes in population density exhibit the opposite effect during the period of 2000-2050.
Global Soil Moisture Analysis at DWD
NASA Astrophysics Data System (ADS)
Lange, M.
2012-04-01
Small errors in the daily forecast of precipitation, evaporation and runoff accumulate into uncertainties of soil water content and lead to systematic biases of temperature and humidity profiles in the boundary layer if no corrections are applied. A new soil moisture assimilation scheme has been developed for the global GME model and has run operationally since March 2011. Like many other variational schemes implemented at NWP centers (e.g. Canadian Met Service, DWD, ECMWF, Meteo France), the scheme is based on minimisation of screen-level forecast errors by adjusting the soil water content, implicitly correcting the partitioning of available energy into latent and sensible heat. The original method proposed by Mahfouf (1991) and described in Hess (2001) requires at least two additional model forecast runs to calculate the gradient of the cost function, i.e. the sensitivity dT2m/dwb, with T2m the 2m temperature and wb the soil water content of the respective top and bottom soil layers. To avoid this computationally costly approach, in the new scheme the sensitivity of screen-level temperature to soil moisture changes is parameterized with derivatives of analytical relations for transpiration from vegetation and bare soil evaporation, as motivated by Jacobs and De Bruin (1992). The comparison of both methods shows a high correlation of the temperature sensitivity that justifies the approximation. The method will be described in detail and verification results will be presented to demonstrate the impact of soil moisture analysis in GME. Hess, R. 2001: Assimilation of screen-level observations by variational soil moisture analysis. Meteorol. Atmos. Phys. 77, 145-154. Jacobs, C.M.M. and H.A.R. De Bruin, 1992: The Sensitivity of Regional Transpiration to Land-Surface Characteristics: Significance of Feedback. J. Clim. 5, 683-698. Mahfouf, J-F. 1991: Analysis of soil moisture from near-surface parameters: A feasibility study. J. Appl. Meteorol. 30, 1534-1547.
Global analysis of intraplate basins
NASA Astrophysics Data System (ADS)
Heine, C.; Mueller, D. R.; Dyksterhuis, S.
2005-12-01
Broad intraplate sedimentary basins often show a mismatch of lithospheric extension factors compared to those inferred from sediment thickness and subsidence modelling, not conforming to the current understanding of rift basin evolution. Mostly, these basins are underlain by a very heterogeneous and structurally complex basement which has been formed as a product of Phanerozoic continent-continent or terrane/arc-continent collision and is usually referred to as accretionary. Most likely, the basin-underlying substrate is one of the key factors controlling the style of extension. In order to investigate and model the geodynamic framework and mechanics controlling the formation and evolution of these long-term depositional regions, we have been analysing a global set of more than 200 basins using various remotely sensed geophysical data sets and relational geospatial databases. We have compared elevation, crustal and sediment thickness, heat flow, crustal structure, basin ages and geometries with computed differential beta, anomalous tectonic subsidence, and differential extension factor grids for these basins. The crust/mantle interactions in the basin regions are investigated using plate tectonic reconstructions in a mantle convection framework for the last 160 Ma. Characteristic parameters and patterns derived from this global analysis are then used to generate a classification scheme, to estimate the misfit between models derived from either crustal thinning or sediment thickness, and as input for extension models using particle-in-cell finite element codes. Basins with high differential extension values include the ``classical'' intraplate basins, like the Michigan Basin in North America, the Zaire Basin in Africa, basins of the Arabian Peninsula, and the West Siberian Basin. According to our global analysis so far, these basins show that, with increasing basin age, the amount of crustal extension vs. the extension values estimated from sediment thickness
A review of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
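The two simplest techniques named in this review, partial differentiation and one-at-a-time parameter variation, coincide in a finite-difference sketch like the following; the three-parameter model is purely illustrative, not one from the review:

```python
def model(p):
    # illustrative three-parameter model (an assumption, not from the review)
    a, b, c = p
    return a * a + 3.0 * b + 0.1 * c

def oat_sensitivity(f, nominal, delta=0.01):
    """One-at-a-time screening: perturb each parameter by a small
    relative step and form the finite-difference partial derivative."""
    y0 = f(nominal)
    sens = []
    for k in range(len(nominal)):
        p = list(nominal)
        p[k] += delta * nominal[k]
        sens.append((f(p) - y0) / (delta * nominal[k]))
    return sens

s = oat_sensitivity(model, [2.0, 1.0, 5.0])
# approximates the partial derivatives (4, 3, 0.1) at the nominal point
```

Such local measures are cheap but only valid near the nominal point; the regression and correlation methods the review covers trade more model runs for a global picture.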
Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K. J.; Mann, G. W.; Spracklen, D. V.
2011-12-01
Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to
Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K.; Mann, G. W.; Spracklen, D. V.
2011-07-01
Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to be an efficient and useful technique for
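A minimal sketch of the variance-based analysis underlying this approach uses a Monte Carlo "pick-and-freeze" estimator of first-order Sobol indices; the two-parameter toy model and sample size are assumptions, and the Gaussian process emulator itself is omitted:

```python
import random

def model(x1, x2):
    # illustrative additive model (an assumption, not the aerosol model):
    # x1 carries most of the output variance, x2 little
    return 4.0 * x1 + x2

def first_order_indices(f, n=50000, seed=1):
    """Monte Carlo 'pick-and-freeze' estimate of the Sobol first-order
    indices S_i = Var[E(Y|X_i)] / Var(Y) for two uniform(0,1) inputs."""
    rng = random.Random(seed)
    y, y_fix1, y_fix2 = [], [], []
    for _ in range(n):
        x1, x2 = rng.random(), rng.random()
        x1b, x2b = rng.random(), rng.random()
        y.append(f(x1, x2))
        y_fix1.append(f(x1, x2b))  # X1 frozen, X2 re-sampled
        y_fix2.append(f(x1b, x2))  # X2 frozen, X1 re-sampled
    mu = sum(y) / n
    var = sum(v * v for v in y) / n - mu * mu
    s1 = (sum(a * b for a, b in zip(y, y_fix1)) / n - mu * mu) / var
    s2 = (sum(a * b for a, b in zip(y, y_fix2)) / n - mu * mu) / var
    return s1, s2

s1, s2 = first_order_indices(model)
# analytically S1 = 16/17 ~ 0.94 and S2 = 1/17 ~ 0.06 for this model
```

In the paper's setting the expensive aerosol model is replaced by the emulator, so the many samples this estimator needs become affordable; interaction effects show up as total-effect indices exceeding these first-order ones.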
Design sensitivity analysis of nonlinear structural response
NASA Technical Reports Server (NTRS)
Cardoso, J. B.; Arora, J. S.
1987-01-01
A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
Sensitivity of flood events to global climate change
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Dimou, George
1997-04-01
The sensitivity of Acheloos river flood events at the outfall of the mountainous Mesochora catchment in Central Greece was analysed under various scenarios of global climate change. The climate change pattern was simulated through a set of hypothetical and monthly GISS (Goddard Institute for Space Studies) scenarios of temperature increase coupled with precipitation changes. The daily outflow of the catchment, which is dominated by spring snowmelt runoff, was simulated by the coupling of snowmelt and soil moisture accounting models of the US National Weather Service River Forecast System. Two threshold levels were used to define a flood day—the double and triple long-term mean daily streamflow—and the flood parameters (occurrences, duration, magnitude, etc.) for these cases were determined. Despite the complicated response of flood events to temperature increase and threshold, both hypothetical and monthly GISS representations of climate change resulted in more and longer flood events for climates with increased precipitation. All climates yielded larger flood volumes and greater mean values of flood peaks with respect to precipitation increase. The lower threshold resulted in more and longer flood occurrences, as well as smaller flood volumes and peaks than those of the upper one. The combination of higher and more frequent flood events could lead to greater risks of inundation and possible damage to structures. Furthermore, the winter swelling of the streamflow could increase erosion of the river bed and banks and hence modify the river profile.
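The flood-day definition used here, streamflow exceeding a multiple of the long-term mean, lends itself to a simple event-counting sketch; the daily flow series below is invented for illustration:

```python
def flood_events(flow, threshold):
    """Group consecutive flood days (flow above the threshold) into
    events; return the event count and the list of event durations."""
    durations, run = [], 0
    for q in flow:
        if q > threshold:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return len(durations), durations

# invented daily streamflow series; threshold = double the long-term mean
flow = [5, 6, 30, 32, 7, 5, 26, 5, 6, 5]
threshold = 2.0 * sum(flow) / len(flow)
n_events, durations = flood_events(flow, threshold)
# -> 2 events, with durations [2, 1]
```

Raising the threshold to triple the mean, as in the study's upper case, shortens and thins out the detected events while raising the volume and peak of those that remain.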
Recent developments in structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Adelman, Howard M.
1988-01-01
Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
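The variogram analogy at the heart of VARS can be illustrated with a directional variogram estimated by Monte Carlo on a hypothetical two-factor response surface; the surface and lag below are assumptions for illustration, not the STAR-VARS algorithm itself:

```python
import math
import random

def response(x1, x2):
    # hypothetical response surface (an assumption, not the paper's model):
    # a strong smooth trend in x1 and a weak oscillation in x2
    return x1 ** 2 + 0.1 * math.sin(10.0 * x2)

def directional_variogram(f, axis, h, n=20000, seed=7):
    """gamma(h) = 0.5 * E[(f(x + h*e_axis) - f(x))^2], estimated by
    Monte Carlo over base points drawn uniformly from the unit square."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = [rng.random(), rng.random()]
        xp = list(x)
        xp[axis] += h
        acc += (f(*xp) - f(*x)) ** 2
    return 0.5 * acc / n

g1 = directional_variogram(response, 0, 0.1)
g2 = directional_variogram(response, 1, 0.1)
# the variogram rises faster along the more influential factor (x1 here)
```

Evaluating such variograms across a range of lags h is what lets the framework span the spectrum from derivative-based (small h, Morris-like) to variance-based (large h, Sobol-like) sensitivity information.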
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method to carry out sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. In contrast to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with a handful of runs covering a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the solution of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
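The forward sensitivity idea, differentiating the model with respect to a parameter and integrating the resulting system alongside the original one, can be sketched on a scalar decay equation; this is a textbook example, not the report's system:

```python
import math

def forward_sensitivity(k=0.5, y0=1.0, t_end=2.0, dt=1e-4):
    """Integrate dy/dt = -k*y together with its forward sensitivity
    s = dy/dk, which satisfies ds/dt = -y - k*s, by explicit Euler."""
    y, s = y0, 0.0
    for _ in range(int(round(t_end / dt))):
        y, s = y + dt * (-k * y), s + dt * (-y - k * s)
    return y, s

y_num, s_num = forward_sensitivity()
y_exact = math.exp(-0.5 * 2.0)          # y(t) = y0 * e^(-k*t)
s_exact = -2.0 * math.exp(-0.5 * 2.0)   # dy/dk = -t * y0 * e^(-k*t)
```

The sensitivity equation is obtained by differentiating the right-hand side with respect to k, so one extra equation per parameter rides along in the same integration, which is exactly why the "glass box" route needs so few runs.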
Increased sensitivity to transient global ischemia in aging rat brain.
Xu, Kui; Sun, Xiaoyan; Puchowicz, Michelle A; LaManna, Joseph C
2007-01-01
Transient global brain ischemia induced by cardiac arrest and resuscitation (CAR) results in reperfusion injury associated with oxidative stress. Oxidative stress is known to produce delayed selective neuronal cell loss and impairment of brainstem function, leading to post-resuscitation mortality. Levels of 4-hydroxy-2-nonenal (HNE) modified protein adducts, a marker of oxidative stress, were found to be elevated after CAR in rat brain. In this study we investigated the effects of an antioxidant, alpha-phenyl-tert-butyl-nitrone (PBN), on the recovery following CAR in the aged rat brain. Male Fischer 344 rats (6, 12 and 24-month old) underwent 7-minute cardiac arrest before resuscitation. Brainstem function was assessed by hypoxic ventilatory response (HVR) and HNE adducts were measured by western blot analysis. Our data showed that in the 24-month old rats, overall survival rate, hippocampal CA1 neuronal counts and HVR were significantly reduced compared to the younger rats. With PBN treatment, the recovery was improved in the aged rat brain, which was consistent with reduced HNE adducts in brain following CAR. Our data suggest that aged rats are more vulnerable to oxidative stress insult and that treatment with PBN improves the outcome following reperfusion injury. The mechanism of action is most likely through the scavenging of reactive oxygen species resulting in reduced lipid peroxidation. PMID:17727265
Coal Transportation Rate Sensitivity Analysis
2005-01-01
On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
Sensitivity of global river discharges under Holocene and future climate conditions
NASA Astrophysics Data System (ADS)
Aerts, J. C. J. H.; Renssen, H.; Ward, P. J.; de Moel, H.; Odada, E.; Bouwer, L. M.; Goosse, H.
2006-10-01
A comparative analysis of global river basins shows that some river discharges are more sensitive to future climate change for the coming century than to natural climate variability over the last 9000 years. In these basins (Ganges, Mekong, Volta, Congo, Amazon, Murray-Darling, Rhine, Oder, Yukon) future discharges increase by 6-61%. These changes are of similar magnitude to changes over the last 9000 years. Some rivers (Nile, Syr Darya) experienced strong reductions in discharge over the last 9000 years (17-56%), but show much smaller responses to future warming. The simulation results for the last 9000 years are validated with independent proxy data.
Stochastic Simulations and Sensitivity Analysis of Plasma Flow
Lin, Guang; Karniadakis, George E.
2008-08-01
For complex physical systems with a large number of random inputs, it is very expensive to perform stochastic simulations over all of them. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, indicating which random inputs have more influence on the system outputs and capturing the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global. The local approach, which relies on partial derivatives of the output with respect to the parameters, measures the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range of their nominal values, local sensitivity does not provide full information to the system operators. The global approach, on the other hand, examines the sensitivity over the entire range of parameter variations. Global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the quasi-Monte Carlo method, and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality, and hence the cost, of stochastic simulations.
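The One-At-a-Time elementary-effects screening described above can be sketched in a few lines. This is a generic illustration; the test model, bounds, and number of base points are invented for the example and are unrelated to the paper's MHD cases:

```python
import numpy as np

def morris_screening(f, bounds, r=20, delta=0.25, seed=0):
    """OAT elementary-effects screening (Morris method).

    f      -- model taking a 1-D parameter vector, returning a scalar
    bounds -- sequence of (low, high) pairs, one per parameter
    r      -- number of random base points
    Returns mu*, the mean absolute elementary effect per parameter,
    used to rank input significance.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    k = len(bounds)
    lo, span = bounds[:, 0], bounds[:, 1] - bounds[:, 0]
    ee = np.zeros((r, k))
    for j in range(r):
        # random base point in the unit cube, kept clear of the upper edge
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        fx = f(lo + span * x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                      # perturb one input at a time
            ee[j, i] = (f(lo + span * xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# Toy model y = 10*x0 + x1 + 0.1*x2 on [0, 1]^3: x0 should rank first
mu_star = morris_screening(lambda x: 10 * x[0] + x[1] + 0.1 * x[2],
                           bounds=[(0, 1)] * 3)
```

For a linear model the elementary effects are exact, so mu* recovers the coefficients; for nonlinear models the spread of the effects across base points additionally flags interactions.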
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Sensitivity analysis for solar plates
NASA Technical Reports Server (NTRS)
Aster, R. W.
1986-01-01
Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
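The advantage of smoothing-based measures over linear regression can be illustrated with a minimal first-order sensitivity estimate, Var(E[y|x_i])/Var(y), where the conditional expectation is fitted by a LOESS-style local linear smoother. This is a generic sketch of the idea, not the authors' stepwise procedure, and the test model is hypothetical:

```python
import numpy as np

def loess_1d(x, y, frac=0.4):
    """Locally weighted linear regression with a tricube kernel."""
    n = len(x)
    k = max(2, int(frac * n))
    fit = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                     # k nearest neighbours
        sw = np.sqrt((1 - (d[idx] / d[idx].max()) ** 3) ** 3)
        A = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)[0]
        fit[i] = beta[0] + beta[1] * x[i]
    return fit

def smoothing_sensitivity(X, y, frac=0.4):
    """First-order measure Var(E[y|x_i]) / Var(y), E[y|x_i] via LOESS."""
    return np.array([np.var(loess_1d(X[:, i], y, frac))
                     for i in range(X.shape[1])]) / np.var(y)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))
y = X[:, 0] ** 2 + 0.1 * X[:, 1]    # symmetric quadratic in x0, weak in x1
S = smoothing_sensitivity(X, y)
```

A linear or rank regression would score x0 near zero here, because the quadratic dependence is symmetric about zero; the smoother recovers it as the dominant input, which is exactly the kind of nonlinear relationship the abstract refers to.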
River Runoff Sensitivity in Eastern Siberia to Global Climate Warming
NASA Astrophysics Data System (ADS)
Georgiadi, A. G.; Milyukova, I. P.; Kashutina, E.
2008-12-01
During the last several decades, significant climate warming has been observed in the permafrost regions of Eastern Siberia. These changes include rises in air temperature as well as in precipitation. Changes in regional climate are accompanied by changes in river runoff. The analysis of the data shows that in the past 25 years, the largest contributions to the annual river runoff increase in the lower reaches of the Lena (Kyusyur) were made (in descending order) by the Lena river watershed (above Tabaga), the Aldan river (Okhotsky Perevoz), and the Vilyui river (Khatyryk-Khomo). A similar relation also holds in the case of floods, with the seasonal river runoff of the Vilyui river slightly decreasing. Completely different relations are noted in winter, when a substantial river runoff increase is recorded in the lower reaches of the Lena river. In this case the major contribution to the winter runoff increase at the Lena outlet is made by the winter runoff increase on the Vilyui river. Unlike the above cases, the summer-fall runoff in the lower reaches of the Lena river tends to decrease, which is similar to the trend exhibited by the Vilyui river. At the same time, the runoff of the Lena (Tabaga) and Aldan (Verkhoyansky Perevoz) rivers increases. According to the results of hydrological modeling, the expected anthropogenic climate warming in the twenty-first century can bring a more significant river runoff increase in the Lena river basin than the recent one. Hydrological responses to climate warming have been evaluated for the plain part of the Lena river basin using a macroscale hydrological model with a simplified description of processes, developed at the Institute of Geography of the Russian Academy of Sciences. Two atmosphere-ocean global circulation models included in the IPCC assessments (ECHAM4/OPY3 and GFDL-R30) were used as scenarios of future global climate. According to the results of hydrological modeling the expected anthropogenic climate warming in
Sensitivity analysis of the critical speed in railway vehicle dynamics
NASA Astrophysics Data System (ADS)
Bigoni, D.; True, H.; Engsig-Karup, A. P.
2014-05-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
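The total sensitivity indices this approach relies on are typically estimated by Monte Carlo pick-freeze sampling. Below is a generic sketch using the Saltelli first-order and Jansen total-effect estimators; the two-parameter additive test model is invented for illustration and is not the railway vehicle model:

```python
import numpy as np

def sobol_indices(f, k, n=10000, seed=0):
    """First-order (Saltelli) and total-effect (Jansen) index estimates
    for a model f evaluated row-wise on the unit hypercube [0, 1]^k."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # swap in column i: freeze the rest
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total effect
    return S1, ST

# Additive test model y = 2*x0 + x1: exact indices S = ST = (0.8, 0.2)
S1, ST = sobol_indices(lambda X: 2 * X[:, 0] + X[:, 1], k=2)
```

The variance of the critical speed ascribable to each suspension parameter corresponds to the numerators of these ratios; for the additive toy model first-order and total indices coincide.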
Glöser, Simon; Soulier, Marcel; Tercero Espinoza, Luis A
2013-06-18
We present a dynamic model of global copper stocks and flows which allows a detailed analysis of recycling efficiencies, copper stocks in use, and dissipated and landfilled copper. The model is based on historical mining and refined copper production data (1910-2010) enhanced by a unique data set of recent global semifinished goods production and copper end-use sectors provided by the copper industry. To enable the consistency of the simulated copper life cycle in terms of a closed mass balance, particularly the matching of recycled metal flows to reported historical annual production data, a method was developed to estimate the yearly global collection rates of end-of-life (postconsumer) scrap. Based on this method, we provide estimates of 8 different recycling indicators over time. The main indicator for the efficiency of global copper recycling from end-of-life (EoL) scrap--the EoL recycling rate--was estimated to be 45% on average, ± 5% (one standard deviation) due to uncertainty and variability over time in the period 2000-2010. As uncertainties of specific input data--mainly concerning assumptions on end-use lifetimes and their distribution--are high, a sensitivity analysis with regard to the effect of uncertainties in the input data on the calculated recycling indicators was performed. The sensitivity analysis included a stochastic (Monte Carlo) uncertainty evaluation with 10^5 simulation runs. PMID:23725041
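A stochastic uncertainty evaluation of this kind amounts to straightforward Monte Carlo propagation over 10^5 runs. The distributions and values below are invented placeholders for a single year, not the study's input data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10 ** 5                                 # simulation runs, as in the study

# Illustrative uncertain inputs (hypothetical values):
eol_generated = rng.normal(10.0, 1.0, n)    # end-of-life scrap arising, Mt
collection = rng.uniform(0.55, 0.75, n)     # collection rate of EoL scrap
proc_yield = rng.uniform(0.80, 0.95, n)     # yield of scrap processing

eol_rr = collection * proc_yield            # EoL recycling rate per run
recycled = eol_generated * eol_rr           # recovered copper, Mt

# Report the indicator as mean +/- one standard deviation
print(f"EoL recycling rate: {eol_rr.mean():.2f} +/- {eol_rr.std():.2f}")
```

Each indicator is re-evaluated per run, so the spread of the resulting distribution directly expresses the sensitivity of the indicator to the input-data uncertainties.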
Adjoint sensitivity analysis of an ultrawideband antenna
Stephanson, M B; White, D A
2011-07-28
The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation and apply it to the problem of optimizing an ultrawideband antenna.
Sensitivity Analysis in the Model Web
NASA Astrophysics Data System (ADS)
Jones, R.; Cornford, D.; Boukouvalas, A.
2012-04-01
The Model Web, and in particular the Uncertainty enabled Model Web being developed in the UncertWeb project aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web it is likely that the users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In
NASA Astrophysics Data System (ADS)
Ichii, Kazuhito; Matsui, Yohei; Murakami, Kazutaka; Mukai, Toshikazu; Yamaguchi, Yasushi; Ogawa, Katsuro
2003-04-01
A simple Earth system model, the Four-Spheres Cycle of Energy and Mass (4-SCEM) model, has been developed to simulate global warming due to anthropogenic CO2 emission. The model consists of the Atmosphere-Earth Heat Cycle (AEHC) model, the Four Spheres Carbon Cycle (4-SCC) model, and their feedback processes. The AEHC model is a one-dimensional radiative-convective model, which includes the greenhouse effect of CO2 and H2O, and one cloud layer. The 4-SCC model is a box-type carbon cycle model, which includes biospheric CO2 fertilization, vegetation area variation, the vegetation light saturation effect, and the HILDA oceanic carbon cycle model. The feedback processes between carbon cycle and climate considered in the model are the temperature dependencies of water vapor content, soil decomposition and ocean surface chemistry. The future status of the global carbon cycle and climate was simulated up to the year 2100 based on the "business as usual" (IS92a) emission scenario, followed by a linear decline in emissions to zero in the year 2200. The atmospheric CO2 concentration reaches 645 ppmv in 2100, peaks at approximately 760 ppmv around the year 2170, and then settles to a steady state near 600 ppmv. The projected CO2 concentration is lower than those of past carbon cycle studies because we included the light saturation effect of vegetation. The sensitivity analysis showed that uncertainties derived from the light saturation effect of vegetation and from land use CO2 emissions were the primary causes of uncertainty in projecting future CO2 concentrations. The climate feedback effects showed rather small sensitivities compared with the impacts of those two effects. Analyses of satellite-based net primary production trends can somewhat decrease the uncertainty in quantifying CO2 emissions due to land use changes. On the other hand, as the estimated parameter in vegetation light saturation was poorly constrained, we have to quantify and constrain the effect more accurately.
NASA Astrophysics Data System (ADS)
DeAngelis, A. M.; Qu, X.; Hall, A. D.; Klein, S. A.
2014-12-01
The hydrological cycle is expected to undergo substantial changes in response to global warming, with all climate models predicting an increase in global-mean precipitation. There is considerable spread among models, however, in the projected increase of global-mean precipitation, even when normalized by surface temperature change. In an attempt to develop a better physical understanding of the causes of this intermodel spread, we investigate the rapid and temperature-mediated responses of global-mean precipitation to CO2 forcing in an ensemble of CMIP5 models by applying regression analysis to pre-industrial and abrupt quadrupled CO2 simulations, and focus on the atmospheric radiative terms that balance global precipitation. The intermodel spread in the temperature-mediated component, which dominates the spread in total hydrological sensitivity, is highly correlated with the spread in temperature-mediated clear-sky shortwave (SW) atmospheric heating among models. Upon further analysis of the sources of intermodel variability in SW heating, we find that increases of upper atmosphere and (to a lesser extent) total column water vapor in response to 1K surface warming only partly explain intermodel differences in the SW response. Instead, most of the spread in the SW heating term is explained by intermodel differences in the sensitivity of SW absorption to fixed changes in column water vapor. This suggests that differences in SW radiative transfer codes among models are the dominant source of variability in the response of atmospheric SW heating to warming. Better understanding of the SW heating sensitivity to water vapor in climate models appears to be critical for reducing uncertainty in the global hydrological response to future warming. Current work entails analysis of observations to potentially constrain the intermodel spread in SW sensitivity to water vapor, as well as more detailed investigation of the radiative transfer schemes in different models and how
Sensitivity analysis and application in exploration geophysics
NASA Astrophysics Data System (ADS)
Tang, R.
2013-12-01
In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that is unavoidably contaminated by various noise sources and is sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the geophysical inverse problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. The interpretation of the result is therefore intractable. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also known as the Jacobian or sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and the reciprocity theorem. By calculating the sensitivity matrix we obtain a visualized sensitivity plot for the solution under investigation: the less-resolved parts are indicated and should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem in exploration geophysics concerns design strategies for joint geophysical surveys, e.g., gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to devise. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of each survey's sensitivity with respect to the
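The perturbation route to the sensitivity matrix described above amounts to a forward-difference Jacobian of the forward operator: perturb one model parameter at a time and re-run the forward model. A generic sketch follows; the toy forward operator is hypothetical, not an MT response:

```python
import numpy as np

def jacobian_fd(forward, m, rel_step=1e-6):
    """Sensitivity matrix J[i, j] = d(data_i)/d(model_j), estimated by
    re-running the forward model with one parameter perturbed at a time."""
    m = np.asarray(m, float)
    d0 = forward(m)
    J = np.empty((len(d0), len(m)))
    for j in range(len(m)):
        h = rel_step * max(abs(m[j]), 1.0)  # scale the step to the parameter
        mp = m.copy()
        mp[j] += h
        J[:, j] = (forward(mp) - d0) / h
    return J

# Toy forward operator mapping 3 model parameters to 2 data values
fwd = lambda m: np.array([m[0] ** 2 + m[1], np.sin(m[1]) * m[2]])
J = jacobian_fd(fwd, [1.0, 0.5, 2.0])
# Columns of J with uniformly small entries flag poorly resolved parameters
```

The cost is one extra forward run per model parameter, which is why reciprocity-based or adjoint calculations are preferred for large model grids.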
Cultural Sensitivity: The Key to Teaching Global Business.
ERIC Educational Resources Information Center
Timm, Judee A.
2003-01-01
More ethical practices in business begin with ethical training in business schools. International business education classes can compare corporate codes and actual behavior; explore the role of cultural differences in values, principles, and standards; and analyze ethical dilemmas in a global environment. (SK)
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. PMID:21930515
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analyses methodologies, but none as comprehensive as the current work.
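A minimal version of such a ranking comparison: score each parameter by two common measures, the Pearson (linear) and Spearman (rank) correlation with the model output, and check whether the resulting rankings agree. The test model below is a generic stand-in, not the tritium dosimetry model:

```python
import numpy as np

def ranks(a):
    """Ordinal ranks of a 1-D array (no-ties stand-in for rankdata)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def correlation_sensitivities(X, y):
    """|Pearson| and |Spearman| correlation of each input with the output."""
    p = [np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])]
    s = [np.corrcoef(ranks(X[:, i]), ranks(y))[0, 1]
         for i in range(X.shape[1])]
    return np.abs(p), np.abs(s)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 3))
y = X[:, 0] ** 3 + 0.2 * X[:, 1]        # monotone but nonlinear in x0
pcc, scc = correlation_sensitivities(X, y)
# Both measures rank x0 first; the rank measure scores it higher because
# it is unaffected by the nonlinearity of the monotone relationship.
```

Comparing such rankings across techniques, as the paper does on a larger scale, exposes where linear measures understate the influence of nonlinearly acting parameters.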
Pediatric Pain, Predictive Inference, and Sensitivity Analysis.
ERIC Educational Resources Information Center
Weiss, Robert
1994-01-01
Coping style and effects of counseling intervention on pain tolerance was studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…
NASA Astrophysics Data System (ADS)
Bernstein, Diana N.; Neelin, J. David
2016-06-01
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive "dangerous ranges." The low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
NIR sensitivity analysis with the VANE
NASA Astrophysics Data System (ADS)
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models of the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation was the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Geothermal well cost sensitivity analysis: current status
Carson, C.C.; Lin, Y.T.
1980-01-01
The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent, but that in specific wells the advances considered can result in significant cost reductions.
Sensitivity analysis for magnetic induction tomography.
Soleimani, Manuchehr; Jersey-Willuhn, Karen
2004-01-01
This work focuses on sensitivity analysis of magnetic induction tomography in terms of theoretical modelling and numerical implementation. We will explain a new and efficient method to determine the Jacobian matrix, directly from the results of the forward solution. The results presented are for the eddy current approximation, and are given in terms of magnetic vector potential, which is computationally convenient, and which may be extracted directly from the FE solution of the forward problem. Examples of sensitivity maps for an opposite sensor geometry are also shown. PMID:17271947
MUSE instrument global performance analysis
NASA Astrophysics Data System (ADS)
Loupias, M.; Bacon, R.; Caillier, P.; Fleischmann, A.; Jarno, A.; Kelz, A.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J. L.; Manescau, A.; Nicklas, H.; Parès, L.; Pécontal, A.; Reiss, R.; Remillieux, A.; Renault, E.; Roth, M. M.; Rupprecht, G.; Stuik, R.
2010-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument developed for ESO (European Southern Observatory), to be mounted on the VLT (Very Large Telescope) in 2012. The MUSE instrument can simultaneously record 90,000 spectra in the visible wavelength range (465-930 nm) across a 1×1 arcmin² field of view, thanks to 24 identical Integral Field Units (IFU). A collaboration of 7 institutes has successfully passed the Final Design Review and is currently working on the first sub-assemblies. The performance budget has been shared among 5 main functional sub-systems. The Fore Optics sub-system derotates and anamorphoses the VLT Nasmyth focal plane image; the Splitting and Relay Optics, associated with the Main Structure, feed each IFU with 1/24th of the field of view. Each IFU is composed of a 3D function, ensured by an image slicer system and a spectrograph, and a detection function, provided by a 4k×4k CCD cooled to 163 K. The 5th function is the calibration and data reduction of the instrument. This article describes the breakdown of performance among these sub-systems (throughput, image quality, etc.) and underlines the constraining parameters of the interfaces, whether internal or with the VLT. The validation of all these requirements is a critical task, started a few months ago, which requires clear traceability and performance analysis.
Sensitivity analysis techniques for models of human behavior.
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, several promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
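The global methods mentioned above are typically variance-based. A minimal sketch of a first-order Sobol' index estimate, using the Saltelli-style pick-freeze estimator on a toy additive model (the model and sample sizes are assumptions for illustration, not the Sandia example model):

```python
import numpy as np

def model(X):
    # Toy additive model: Y = X1 + 2*X2 with X ~ U[0,1]^2,
    # so the analytic first-order indices are S1 = 0.2, S2 = 0.8.
    return X[:, 0] + 2.0 * X[:, 1]

rng = np.random.default_rng(0)
N, d = 100_000, 2
A = rng.random((N, d))
B = rng.random((N, d))
yA, yB = model(A), model(B)
varY = np.var(np.concatenate([yA, yB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # "freeze" all inputs but i
    # Pick-freeze estimator of the first-order Sobol' index S_i
    S.append(np.mean(yB * (model(ABi) - yA)) / varY)
```

For nonlinear models the total-effect indices (estimated from the same matrices) additionally capture the interaction effects that local methods miss.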
Evaluating the sensitivity of local temperature distributions to global climate change
NASA Astrophysics Data System (ADS)
Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.
2012-04-01
Climate change adaptation activities take place at regional and local scales. The sensitivity of climate to increasing greenhouse gases is, however, most often studied at the global scale [Knutti and Hegerl 2008, and references therein]. At adaptation-relevant spatial scales, information is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005]. These face significant questions of robustness and reliability as a basis for forecasts on such scales [Stainforth et al., 2007]. Here we propose a different approach, using observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. There are many advantages to such an approach: it avoids issues relating to model imperfections, it can be focused on decision-relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Our approach takes timeseries of local daily temperature from specific locations and extracts the changing cumulative distribution function (cdf) over time. We use the e-obs dataset to construct such cdf-timeseries for locations across Europe. We analyse these changing cdfs using a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural variability and/or the consequences of climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions. These sensitivities are shown to vary geographically across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We nevertheless find many regionally consistent patterns of response of potential value in adaptation planning. Both the methodology and a sensitivity analysis will be presented. The technique has the potential to be applied to many other variables in addition to
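The quantile-by-quantile comparison of two periods' empirical cdfs can be sketched as follows. The synthetic "early" and "late" samples below are assumptions standing in for station daily-temperature records (not e-obs data); the later period is shifted warm with extra warming in the upper tail.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily temperatures (deg C) for two multi-year periods:
early = rng.normal(10.0, 5.0, 5000)
late = rng.normal(11.0, 5.5, 5000)   # warmer mean, wider spread

# Shift of each quantile of the empirical distribution between periods;
# a tail-dependent shift indicates quantile-specific sensitivity.
qs = np.linspace(0.05, 0.95, 19)
shift = np.quantile(late, qs) - np.quantile(early, qs)
```

Here the warm tail (high quantiles) shifts more than the cold tail, the kind of quantile-dependent response the deconstruction above is designed to expose.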
Evaluating the sensitivity of local temperature distributions to global climate change
NASA Astrophysics Data System (ADS)
Chapman, S. C.; Stainforth, D.; Watkins, N. W.
2012-12-01
Climate change adaptation activities take place at regional and local scales. The sensitivity of climate to increasing greenhouse gases is, however, most often studied at the global scale [Knutti and Hegerl 2008, and references therein]. At adaptation-relevant spatial scales, information is most often based on simulations of complex climate models [Murphy et al. 2009, Tebaldi et al. 2005]. These face significant questions of robustness and reliability as a basis for forecasts on such scales [Stainforth et al., 2007]. Here we propose a different approach, using observational timeseries to evaluate the sensitivity of different parts of the local climatic distribution. There are many advantages to such an approach: it avoids issues relating to model imperfections, it can be focused on decision-relevant thresholds [e.g. Porter and Semenov, 2005], and it inherently integrates information relating to local climatic influences. Our approach takes timeseries of local daily temperature from specific locations and extracts the changing cumulative distribution function (cdf) over time. We use the e-obs dataset to construct such cdf-timeseries for locations across Europe. We analyse these changing cdfs using a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural variability and/or the consequences of climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions. These sensitivities are shown to vary geographically across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We nevertheless find many regionally consistent patterns of response of potential value in adaptation planning. Both the methodology and a sensitivity analysis will be presented. The technique has the potential to be applied to many other variables in addition to
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of 'nursing-sensitive indicators' is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using 'clinical indicators' or 'quality of nursing care' as subject headings and incorporating keyword combinations of 'acute care' and 'nurs*', CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English-language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included hours of nursing care per patient day and nurse staffing. Outcome attributes related to patient care included the prevalence of pressure ulcers, falls and falls with injury, selected nosocomial infections and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understanding of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
Rotary absorption heat pump sensitivity analysis
NASA Astrophysics Data System (ADS)
Bamberger, J. A.; Zalondek, F. R.
1990-03-01
Conserve Resources, Incorporated is currently developing an innovative, patented absorption heat pump. The heat pump uses rotation and thin-film technology to enhance the absorption process and to provide a more efficient, compact system. The results of a sensitivity analysis of rotary absorption heat pump (RAHP) performance, conducted to further the development of a 1-ton RAHP, are presented. The objective of the uncertainty analysis was to determine the sensitivity of RAHP steady-state performance to uncertainties in design parameters. Prior to conducting the uncertainty analysis, a computer model was developed to describe the performance of the RAHP thermodynamic cycle. The RAHP performance is based on many interrelated factors, not all of which could be investigated during the sensitivity analysis. Confirmatory measurements of LiBr/H2O properties during absorber/generator operation will provide experimental verification that the system is operating as designed. Quantities to be measured include flow rate in the absorber and generator, film thickness, recirculation rate, and the effects of rotational speed on these parameters.
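A cycle-model sensitivity study of this kind often starts with normalized one-at-a-time perturbations. The sketch below uses a hypothetical Carnot-style heating-COP model as a stand-in for the RAHP cycle code (the model form, parameter names, and values are all assumptions, not from the report):

```python
def cop(T_cond, T_evap, eta):
    # Idealized Carnot-style heating COP scaled by an efficiency factor;
    # a hypothetical stand-in for the RAHP thermodynamic-cycle model.
    return eta * T_cond / (T_cond - T_evap)

base = {"T_cond": 330.0, "T_evap": 280.0, "eta": 0.5}  # K, K, dimensionless
cop0 = cop(**base)

sensitivity = {}
for name in base:
    hi = dict(base); hi[name] = base[name] * 1.01
    lo = dict(base); lo[name] = base[name] * 0.99
    # Normalized sensitivity: % change in COP per % change in the input
    sensitivity[name] = (cop(**hi) - cop(**lo)) / (2 * 0.01 * cop0)
```

Because the temperature lift appears in the denominator, the normalized temperature sensitivities dwarf the efficiency sensitivity, which is why uncertainty in operating temperatures typically dominates such an analysis.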
A climate sensitivity test using a global cloud resolving model under an aqua planet condition
NASA Astrophysics Data System (ADS)
Miura, Hiroaki; Tomita, Hirofumi; Nasuno, Tomoe; Iga, Shin-ichi; Satoh, Masaki; Matsuno, Taroh
2005-10-01
A global Cloud Resolving Model (CRM) is used in a climate sensitivity test for an aqua planet in this first attempt to evaluate climate sensitivity without cumulus parameterizations. Results from a control experiment and an experiment with global sea surface temperature (SST) warmer by 2 K are examined. Notable features in the simulation with warmer SST include a wider region of active convection, a weaker Hadley circulation, mid-tropospheric moistening in the subtropics, and more clouds in the extratropics. Negative feedback from short-wave radiation reduces the climate sensitivity parameter compared to a result in a more conventional model with a cumulus parameterization.
Global thermohaline circulation. Part 2: Sensitivity with interactive atmospheric transports
Wang, X.; Stone, P.H.; Marotzke, J.
1999-01-01
A hybrid coupled ocean-atmosphere model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part 1. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional additive flux adjustment, but quantitatively more stable.
Global Precipitation Analysis Using Satellite Observations
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric
2002-01-01
Global precipitation analysis covering the last few decades and the impact of the new TRMM (Tropical Rainfall Measuring Mission) observations are reviewed in the context of weather and climate applications. All the data sets discussed are the result of mergers of information from multiple satellites and gauges, where available. The focus of the talk is on TRMM-based 3 hr. analyses that use TRMM to calibrate polar-orbit microwave observations from SSM/I (and other satellites) and geosynchronous IR observations and merge the various calibrated observations into a final, 3 hr. resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998-present) at the end of 2002. A real-time version of this merged product is being produced and is available at 0.25 deg latitude-longitude resolution over the latitude range from 50 deg N-50 deg S. Examples will be shown, including its use in monitoring flood conditions and in relating weather-scale patterns to climate-scale patterns. The 3-hourly analysis is placed in the context of two research products of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP). The first is the 23 year, monthly, globally complete precipitation analysis that is used to explore global and regional variations and trends and is compared to the much shorter TRMM tropical data set. The GPCP data set shows no significant global trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Nino and La Nina precipitation features is evident in the 23-year data set. This pattern is related to an increase with time in the number of combined months of El Nino and La Nina during the 23 year period. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both
Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC)
NASA Astrophysics Data System (ADS)
Rad, Mary L.; Zou, Luyao; Sanders, James L.; Widicus Weaver, Susanna L.
2016-01-01
Context. Broadband receivers that operate at millimeter and submillimeter frequencies necessitate the development of new tools for spectral analysis and interpretation. Simultaneous, global, multimolecule, multicomponent analysis is necessary to accurately determine the physical and chemical conditions from line-rich spectra that arise from sources like hot cores. Aims: We aim to provide a robust and efficient automated analysis program to meet the challenges presented by the large spectral datasets produced by radio telescopes. Methods: We have written a program in the MATLAB numerical computing environment for simultaneous global analysis of broadband line surveys. The Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC) program uses the simplifying assumption of local thermodynamic equilibrium (LTE) for spectral analysis to determine molecular column density, temperature, and velocity information. Results: GOBASIC achieves simultaneous, multimolecule, multicomponent fitting for broadband spectra. The number of components that can be analyzed at once is only limited by the available computational resources. Analysis of subsequent sets of molecules or components is performed iteratively while taking the previous fits into account. All features of a given molecule across the entire window are fitted at once, which is preferable to the rotation diagram approach because global analysis is less sensitive to blended features and noise features in the spectra. In addition, the fitting method used in GOBASIC is insensitive to the initial conditions chosen, the fitting is automated, and fitting can be performed in a parallel computing environment. These features make GOBASIC a valuable improvement over previously available LTE analysis methods. A copy of the software is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/585/A23
Estimation of global aortic pulse wave velocity by flow-sensitive 4D MRI.
Markl, Michael; Wallis, Wolf; Brendecke, Stefanie; Simon, Jan; Frydrychowicz, Alex; Harloff, Andreas
2010-06-01
The aim of this study was to determine the value of flow-sensitive four-dimensional MRI for the assessment of pulse wave velocity as a measure of vessel compliance in the thoracic aorta. Findings in 12 young healthy volunteers were compared with those in 25 stroke patients with aortic atherosclerosis and an age-matched normal control group (n = 9). Results from pulse wave velocity calculations incorporated velocity data from the entire aorta and were compared to those of standard methods based on flow waveforms at only two specific anatomic landmarks. Global aortic pulse wave velocity was higher in patients with atherosclerosis (7.03 +/- 0.24 m/sec) compared to age-matched controls (6.40 +/- 0.32 m/sec). Both were significantly (P < 0.001) increased compared to younger volunteers (4.39 +/- 0.32 m/sec). Global aortic pulse wave velocity in young volunteers was in good agreement with previously reported MRI studies and catheter measurements. Estimation of measurement inaccuracies and error propagation analysis demonstrated only minor uncertainties in measured flow waveforms and moderate relative errors below 16% for aortic compliance in all 46 subjects. These results demonstrate the feasibility of pulse wave velocity calculation based on four-dimensional MRI data by exploiting its full volumetric coverage, which may also be an advantage over standard two-dimensional techniques in the often-distorted route of the aorta in patients with atherosclerosis. PMID:20512861
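Pulse wave velocity is the path length between two analysis planes divided by the pulse transit time. A minimal sketch of the two-plane (foot-to-foot) variant, using synthetic flow waveforms; the pulse shape, the 0.28 m path length, and the 40 ms delay are illustrative assumptions, not values from the study:

```python
import numpy as np

fs = 1000.0                      # sampling rate, samples per second
t = np.arange(0, 1.0, 1 / fs)    # one cardiac cycle (s)

def waveform(delay):
    # Synthetic systolic flow pulse arriving `delay` seconds later;
    # real waveforms would come from the 4D MRI velocity field.
    return np.exp(-((t - delay - 0.1) ** 2) / (2 * 0.02 ** 2))

def foot_time(w):
    # Arrival time estimated as the first crossing of 20% of the peak.
    return t[np.argmax(w >= 0.2 * w.max())]

prox = waveform(0.00)            # proximal plane
dist = waveform(0.04)            # distal plane, arriving 40 ms later
path_length = 0.28               # metres along the aortic centerline (assumed)
pwv = path_length / (foot_time(dist) - foot_time(prox))
```

The global method in the abstract generalizes this idea by regressing arrival time against centerline distance over many planes covering the whole aorta, which averages down the timing noise of any single pair of planes.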
Global QCD Analysis and Hadron Collider Physics
Tung, W.-K.
2005-03-22
The role of global QCD analysis of parton distribution functions (PDFs) in collider physics at the Tevatron and LHC is surveyed. The current status of PDF analyses is reviewed, emphasizing the uncertainties and the open issues. The stability of NLO QCD global analysis and its predictions for 'standard candle' W/Z cross sections at hadron colliders are discussed. The importance of the precise measurement of various W/Z cross sections at the Tevatron in advancing our knowledge of PDFs, hence in enhancing the capabilities of making significant progress in W mass and top quark parameter measurements, as well as the discovery potentials of Higgs and New Physics at the Tevatron and LHC, is emphasized.
Trends in sensitivity analysis practice in the last decade.
Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano
2016-10-15
The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis in Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Although OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with a preference for regression- and variance-based techniques. Even after adjusting for the growth of publications in the modelling field alone, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the articles published in chemical modelling alone, a field historically proficient in the use of SA methods. PMID:26934843
The Theoretical Foundation of Sensitivity Analysis for GPS
NASA Astrophysics Data System (ADS)
Shikoska, U.; Davchev, D.; Shikoski, J.
2008-10-01
In this paper the equations of sensitivity analysis are derived and their theoretical underpinnings established. The paper propounds land-vehicle navigation concepts and a definition of sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis to the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations are re-derived for a linearized Kalman filter.
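One simple sensitivity question for a Kalman filter is how the converged gain responds to the assumed measurement-noise variance. The scalar random-walk model below is an illustrative stand-in (not the paper's GPS/navigation filter):

```python
def steady_state_gain(q, r, iters=500):
    # Iterate the scalar Riccati recursion for a random-walk state
    # (x_k = x_{k-1} + w, y_k = x_k + v) until the gain converges.
    p = 1.0
    for _ in range(iters):
        p_pred = p + q                 # predicted error variance
        k = p_pred / (p_pred + r)      # Kalman gain
        p = (1.0 - k) * p_pred         # updated error variance
    return k

# Finite-difference sensitivity of the converged gain to the
# measurement-noise variance r (process noise q held fixed):
base = steady_state_gain(q=0.01, r=1.0)
bumped = steady_state_gain(q=0.01, r=1.1)
dk_dr = (bumped - base) / 0.1
```

The negative sign of `dk_dr` recovers the expected behavior: as the filter trusts its measurements less, the steady-state gain falls.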
A global analysis of island pyrogeography
NASA Astrophysics Data System (ADS)
Trauernicht, C.; Murphy, B. P.
2014-12-01
Islands have provided insight into the ecological role of fire worldwide through research on the positive feedbacks between fire and nonnative grasses, particularly in the Hawaiian Islands. However, the global extent and frequency of fire on islands as an ecological disturbance has received little attention, possibly because 'natural fires' on islands are typically limited to infrequent dry lightning strikes and isolated volcanic events. But because most contemporary fires on islands are anthropogenic, islands provide ideal systems with which to understand the linkages between socio-economic development, shifting fire regimes, and ecological change. Here we use the density of satellite-derived (MODIS) active fire detections for the years 2000-2014 and global data sets of vegetation, climate, population density, and road development to examine the drivers of fire activity on islands at the global scale, and compare these results to existing pyrogeographic models derived from continental data sets. We also use the Hawaiian Islands as a case study to understand the extent to which novel fire regimes can pervade island ecosystems. The global analysis indicates that fire is a frequent disturbance across islands worldwide, strongly affected by human activities, indicating people can more readily override climatic drivers than on continental land masses. The extent of fire activity derived from local records in the Hawaiian Islands reveals that our global analysis likely underestimates the prevalence of fire among island systems and that the combined effects of human activity and invasion by nonnative grasses can create conditions for frequent and relatively large-scale fires. Understanding the extent of these novel fire regimes, and mitigating their impacts, is critical to reducing the current and rapid degradation of native island ecosystems worldwide.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
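A success-probability sensitivity measure of the kind described can be sketched by binning one dispersed input and estimating P(requirement met) per bin; a flat profile means the input does not drive requirement satisfaction. The two inputs, the miss-distance model, and the 2.0 threshold below are hypothetical stand-ins, not Orion dispersions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000
wind = rng.normal(0.0, 1.0, N)   # hypothetical input that drives miss distance
mass = rng.normal(0.0, 1.0, N)   # hypothetical input that barely matters
miss = 2.0 * wind + 0.1 * mass + rng.normal(0.0, 0.2, N)
success = np.abs(miss) < 2.0     # the "requirement"

def success_by_bin(x, ok, nbins=5):
    # Estimate P(success) within equal-count bins of one input.
    edges = np.quantile(x, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, nbins - 1)
    return np.array([ok[idx == b].mean() for b in range(nbins)])

p_wind = success_by_bin(wind, success)
p_mass = success_by_bin(mass, success)
```

The spread of the binned success probability across bins then ranks inputs: the "wind" profile varies strongly while the "mass" profile is nearly flat.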
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models that cannot be dealt with using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
A Post-Monte-Carlo Sensitivity Analysis Code
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
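For near-linear models, a common post-Monte-Carlo way to apportion output variance to inputs is via squared standardized regression coefficients (SRCs). A minimal sketch on a synthetic sample (the three-input model and coefficients are assumptions, not SATOOL's internals):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000
# Hypothetical Monte Carlo input sample; X2 dominates the output variance.
X = rng.normal(0.0, 1.0, (N, 3))
y = 0.5 * X[:, 0] + 3.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0.0, 0.1, N)

# Standardized regression coefficients: for a near-linear model,
# SRC_i^2 approximates input i's share of the output variance.
beta = np.linalg.lstsq(X, y - y.mean(), rcond=None)[0]
src2 = (beta * X.std(axis=0)) ** 2 / y.var()
ranking = np.argsort(src2)[::-1]   # inputs ordered by importance
```

Ranking inputs by `src2` then tells the analyst which variables to re-specify with lower variance to shrink the output variance most.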
HYDROLOGIC SENSITIVITIES OF THE SACRAMENTO-SAN JOAQUIN RIVER BASIN, CA TO GLOBAL WARMING
The hydrologic sensitivities of four medium-sized mountainous catchments in the Sacramento and San Joaquin River basins to long-term global warming were analyzed. The hydrologic response of these catchments, all of which are dominated by spring snowmelt runoff, was simulated by t...
Long Trajectory for the Development of Sensitivity to Global and Biological Motion
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2011-01-01
We used a staircase procedure to test sensitivity to (1) global motion in random-dot kinematograms moving at 4 and 18 degrees per second and (2) biological motion. Thresholds were defined as (1) the minimum percentage of signal dots (i.e. the maximum percentage of noise dots) necessary for accurate discrimination of upward versus…
Toward a Globally Sensitive Definition of Inclusive Education Based in Social Justice
ERIC Educational Resources Information Center
Shyman, Eric
2015-01-01
While many policies, pieces of legislation and educational discourse focus on the concept of inclusion, or inclusive education, the field of education as a whole lacks a clear, precise and comprehensive definition that is both globally sensitive and based in social justice. Even international efforts including the UN Convention on the Rights of…
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
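The sensitivity coefficients LSENS computes measure how dependent variables respond to rate-coefficient parameters. A minimal kinetics stand-in (a single first-order reaction A → B, not an LSENS problem) illustrates the idea: integrate the rate equation with RK4 and estimate dA/dk by central differences, checked against the analytic value:

```python
import numpy as np

def integrate(k, a0=1.0, t_end=2.0, n=2000):
    # RK4 integration of the single-reaction system dA/dt = -k*A.
    dt = t_end / n
    a = a0
    f = lambda x: -k * x
    for _ in range(n):
        k1 = f(a)
        k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2)
        k4 = f(a + dt * k3)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

k = 0.8
# Sensitivity coefficient dA/dk at t_end via central differences;
# the analytic value for A(t) = A0*exp(-k*t) is -t*A0*exp(-k*t).
h = 1e-5
sens = (integrate(k + h) - integrate(k - h)) / (2 * h)
analytic = -2.0 * np.exp(-0.8 * 2.0)
```

Codes like LSENS compute such coefficients far more efficiently by integrating the sensitivity equations alongside the state equations rather than re-running the model per parameter.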
Global meta-analysis of transcriptomics studies.
Caldas, José; Vinga, Susana
2014-01-01
Transcriptomics meta-analysis aims at re-using existing data to derive novel biological hypotheses, and is motivated by the public availability of a large number of independent studies. Current methods are based on breaking down studies into multiple comparisons between phenotypes (e.g. disease vs. healthy), based on the studies' experimental designs, followed by computing the overlap between the resulting differential expression signatures. While useful, in this methodology each study yields multiple independent phenotype comparisons, and connections are established not between studies, but rather between subsets of the studies corresponding to phenotype comparisons. We propose a rank-based statistical meta-analysis framework that establishes global connections between transcriptomics studies without breaking down studies into sets of phenotype comparisons. By using a rank product method, our framework extracts global features from each study, corresponding to genes that are consistently among the most expressed or differentially expressed genes in that study. Those features are then statistically modelled via a term-frequency inverse-document frequency (TF-IDF) model, which is then used for connecting studies. Our framework is fast and parameter-free; when applied to large collections of Homo sapiens and Streptococcus pneumoniae transcriptomics studies, it performs better than similarity-based approaches in retrieving related studies, using a Medical Subject Headings gold standard. Finally, we highlight via case studies how the framework can be used to derive novel biological hypotheses regarding related studies and the genes that drive those connections. Our proposed statistical framework shows that it is possible to perform a meta-analysis of transcriptomics studies with arbitrary experimental designs by deriving global expression features rather than decomposing studies into multiple phenotype comparisons. PMID:24586684
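The rank product step that extracts a study's global features can be sketched as follows. The five-gene toy data and the forced top gene are synthetic assumptions, not from the paper's Homo sapiens or S. pneumoniae collections:

```python
import numpy as np

rng = np.random.default_rng(4)
genes = np.array(["g0", "g1", "g2", "g3", "g4"])
n_rep = 4
# Expression ranks of each gene in 4 replicate comparisons (1 = top).
ranks = np.vstack([rng.permutation(len(genes)) + 1
                   for _ in range(n_rep)]).astype(float)
ranks[:, 1] = 1.0  # make g1 consistently top-ranked

# Rank product: geometric mean of a gene's ranks across replicates.
# Small values flag genes that are consistently extreme; these become
# the study's global features fed into the TF-IDF model.
rank_product = np.exp(np.log(ranks).mean(axis=0))
top_gene = genes[np.argmin(rank_product)]
```

Because it works on ranks rather than raw expression values, this statistic is comparable across studies with different platforms and normalizations, which is what makes the cross-study TF-IDF step possible.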
Toward the globalization of behavior analysis
Malott, Maria E.
2004-01-01
Globalization could facilitate the long-term growth of behavior analysis, and although progress has been made, much yet needs to be done. Given the scarcity of resources, it is suggested that we draw from successes in the development of behavior analysis and establish behavioral programs around the world that embrace research, education, and practice as a focus of systematic globalization efforts. The strategy would require the implementation of cultural contingencies that support initiation and long-term program expansion. For program initiation, contingencies are needed to place pioneer behavior analysts in university units that would be unlikely to start a behavioral program otherwise. The task of these pioneers would be to build a critical mass that would multiply behavior-analytic repertoires, obtain research funding, conduct publishable research, and establish applied settings. For long-term program development, the field should expand internationally as it continues building the infrastructure needed to accelerate the demand for behavioral programs in higher education, scholarly work in behavior analysis, behavior analysts in existing jobs, and behavioral technology in the marketplace. PMID:22478413
The global analysis of DEER data
Brandon, Suzanne; Beth, Albert H.; Hustedt, Eric J.
2012-01-01
Double Electron–Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies. PMID:22578560
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to handling systems with discontinuous events and providing an intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
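As a sketch of what a global, variance-based analysis like the Sobol option above computes, here is a first-order Sobol index estimated with a standard Monte Carlo (Saltelli-type) estimator on a toy additive model; the model, sample size, and seed are illustrative, and this is not SBML-SAT's own code.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model y = 4*x1 + 3*x2 with x_i ~ Uniform(0, 1);
    # analytic first-order indices are a_i^2 / sum(a^2) = 0.64 and 0.36.
    a = np.array([4.0, 3.0])
    return x @ a

n, d = 20000, 2
A = rng.uniform(size=(n, d))      # base sample
B = rng.uniform(size=(n, d))      # independent resample
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # resample only factor i
    # Saltelli-type estimator of the first-order index S_i.
    S.append(np.mean(fB * (model(ABi) - fA)) / var)
```

With 20000 base samples the estimates land within a few percent of the analytic values 0.64 and 0.36.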
Multicomponent dynamical nucleation theory and sensitivity analysis.
Kathmann, Shawn M; Schenter, Gregory K; Garrett, Bruce C
2004-05-15
Vapor to liquid multicomponent nucleation is a dynamical process governed by a delicate interplay between condensation and evaporation. Since the population of the vapor phase is dominated by monomers at reasonable supersaturations, the formation of clusters is governed by monomer association and dissociation reactions. Although there is no intrinsic barrier in the interaction potential along the minimum energy path for the association process, the formation of a cluster is impeded by a free energy barrier. Dynamical nucleation theory provides a framework in which equilibrium evaporation rate constants can be calculated and the corresponding condensation rate constants determined from detailed balance. The nucleation rate can then be obtained by solving the kinetic equations. The rate constants governing the multistep kinetics of multicomponent nucleation including sensitivity analysis and the potential influence of contaminants will be presented and discussed. PMID:15267849
Sensitivity analysis of periodic matrix population models.
Caswell, Hal; Shyu, Esther
2012-12-01
Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments. PMID:23316494
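The paper obtains these sensitivities analytically via matrix calculus; a finite-difference sketch for a hypothetical two-season model illustrates the quantity being computed (the response of the periodic product's dominant eigenvalue to each entry of each seasonal matrix). All matrix values are invented for illustration.

```python
import numpy as np

# Two hypothetical seasonal projection matrices (illustrative values only).
B1 = np.array([[0.0, 2.0],
               [0.5, 0.8]])   # e.g. breeding season: fecundity and survival
B2 = np.array([[0.9, 0.0],
               [0.1, 0.7]])   # e.g. non-breeding season: survival only

def annual_growth_rate(mats):
    # Growth over the full cycle is the dominant eigenvalue of the
    # periodic product, applying the seasons in order: A = B_m ... B_1.
    A = np.eye(mats[0].shape[0])
    for B in mats:
        A = B @ A
    return np.max(np.abs(np.linalg.eigvals(A)))

lam = annual_growth_rate([B1, B2])

# Finite-difference sensitivity of lambda to each entry of each seasonal matrix.
h = 1e-6
sens = []
for k in range(2):
    Sk = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            mats = [B1.copy(), B2.copy()]
            mats[k][i, j] += h
            Sk[i, j] = (annual_growth_rate(mats) - lam) / h
    sens.append(Sk)
```

For these matrices the full-cycle matrix is B2 @ B1 with dominant eigenvalue exactly 1.26, and (by Perron-Frobenius monotonicity for nonnegative matrices) every entry sensitivity is nonnegative.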
Global analysis of the phase calibration operation
NASA Astrophysics Data System (ADS)
Lannes, André
2005-04-01
A global approach to phase calibration is presented. The corresponding theoretical framework calls on elementary concepts of algebraic graph theory (spanning tree of maximal weight, cycles) and algebraic number theory (lattice, nearest lattice point). The traditional approach can thereby be better understood. In radio imaging and in optical interferometry, the self-calibration procedures must often be conducted with much care. The analysis presented should then help in finding a better compromise between the coverage of the calibration graph (which must be as complete as possible) and the quality of the solution (which must of course be reliable).
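The spanning-tree-of-maximal-weight step mentioned above can be illustrated with Kruskal's algorithm run on edges in decreasing weight order; the calibration graph below (stations and baseline quality weights) is hypothetical.

```python
# Hypothetical calibration graph: nodes are stations/apertures, edge weights
# a quality measure (e.g. S/N) of the phase measurement on that baseline.
edges = [
    ("A", "B", 5.0), ("B", "C", 4.0), ("A", "C", 2.0),
    ("C", "D", 3.0), ("B", "D", 1.0),
]

def max_spanning_tree(edges):
    # Kruskal's algorithm on edges sorted by decreasing weight,
    # with a union-find structure to reject cycle-forming edges.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

tree = max_spanning_tree(edges)
```

The tree keeps the three heaviest non-cyclic edges (A-B, B-C, C-D), i.e. the most reliable baselines along which phases are propagated; the remaining edges close the cycles used for consistency checks.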
Global QCD Analysis of Polarized Parton Densities
Stratmann, Marco
2009-08-04
We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focus on the ground-deformation network topology and on measurement noise. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
On computational schemes for global-local stress analysis
NASA Technical Reports Server (NTRS)
Reddy, J. N.
1989-01-01
An overview is given of global-local stress analysis methods, associated difficulties, and recommendations for future research. The phrase global-local analysis is understood to mean an analysis in which some parts of the domain or structure are singled out for more accurate determination of stresses and displacements, or for more refined analysis, than the remaining parts. The parts receiving refined analysis are termed local and the remaining parts are called global. Typically, local regions are small in size compared to global regions, while the computational effort can be larger in local regions than in global regions.
Longitudinal Genetic Analysis of Anxiety Sensitivity
ERIC Educational Resources Information Center
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
Tsunamis: Global Exposure and Local Risk Analysis
NASA Astrophysics Data System (ADS)
Harbitz, C. B.; Løvholt, F.; Glimsdal, S.; Horspool, N.; Griffin, J.; Davies, G.; Frauenfelder, R.
2014-12-01
The 2004 Indian Ocean tsunami led to a better understanding of the likelihood of tsunami occurrence and potential tsunami inundation, and the Hyogo Framework for Action (HFA) was one direct result of this event. The United Nations International Strategy for Disaster Risk Reduction (UN-ISDR) adopted the HFA in January 2005 in order to reduce disaster risk. As an instrument to compare the risk due to different natural hazards, an integrated worldwide study was implemented and published in several Global Assessment Reports (GAR) by UN-ISDR. The results of the global earthquake-induced tsunami hazard and exposure analysis for a return period of 500 years are presented. Both deterministic and probabilistic (PTHA) methods are used. The resulting hazard levels for both methods are compared quantitatively for selected areas. The comparison demonstrates that the analysis is rather rough, which is expected for a study aiming at average trends on a country level across the globe. It is shown that populous Asian countries account for the largest absolute number of people living in tsunami-prone areas; more than 50% of the total exposed population lives in Japan. Smaller nations like Macao and the Maldives are among the most exposed by population count. Exposed nuclear power plants are limited to Japan, China, India, Taiwan, and the USA. In contrast, a local tsunami vulnerability and risk analysis applies information on population, building types, infrastructure, inundation, and flow depth for a given tsunami scenario and corresponding return period, combined with empirical data on tsunami damages and mortality. Results and validation of a GIS tsunami vulnerability and risk assessment model are presented. The GIS model is adapted for optimal use of the data available for each study. Finally, the importance of including landslide sources in the tsunami analysis is also discussed.
Sensitivity Analysis of Wing Aeroelastic Responses
NASA Technical Reports Server (NTRS)
Issac, Jason Cherian
1995-01-01
Design for prevention of aeroelastic instability (that is, ensuring the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip-theory formulation is developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight
Assessing flood risk at the global scale: model setup, results, and sensitivity
NASA Astrophysics Data System (ADS)
Ward, Philip J.; Jongman, Brenden; Sperna Weiland, Frederiek; Bouwman, Arno; van Beek, Rens; Bierkens, Marc F. P.; Ligtvoet, Willem; Winsemius, Hessel C.
2013-12-01
Globally, economic losses from flooding exceeded US$19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million) and annual expected exposed GDP (US$1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low-frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures.
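The final step of such a cascade, converting an impact-versus-return-period curve into an annual expected impact, can be sketched by trapezoidal integration over annual exceedance probability; the damage figures below are invented for illustration.

```python
# Hypothetical (return period [years], damage [billion $]) pairs for one region.
curve = [(2, 0.0), (10, 5.0), (50, 20.0), (100, 30.0), (500, 45.0)]

def expected_annual_impact(curve):
    # Convert return periods T to annual exceedance probabilities p = 1/T,
    # then integrate damage over p by the trapezoidal rule.
    pts = sorted(((1.0 / T, D) for T, D in curve), reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead

ead = expected_annual_impact(curve)
```

For this curve the expected annual impact works out to 2.55 billion per year; note how the frequent, low-damage events contribute as much as the rare, severe ones.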
Global climate sensitivity derived from ~784,000 years of SST data
NASA Astrophysics Data System (ADS)
Friedrich, T.; Timmermann, A.; Tigchelaar, M.; Elison Timm, O.; Ganopolski, A.
2015-12-01
Global mean temperatures will increase in response to future increases in greenhouse gas concentrations. The magnitude of this warming for a given radiative forcing is still a subject of debate. Here we provide estimates of the equilibrium climate sensitivity using paleo-proxy and modeling data from the last eight glacial cycles (~784,000 years). First, two reconstructions of globally averaged surface air temperature (SAT) for the last eight glacial cycles are obtained from two independent sources: one mainly based on a transient model simulation, the other derived from paleo-SST records and SST network/global SAT scaling factors. Both reconstructions exhibit very good agreement in both the amplitude and timing of past SAT variations. In the second step, we calculate the radiative forcings associated with greenhouse gas concentrations, dust concentrations, and surface albedo changes for the last 784,000 years. The equilibrium climate sensitivity is then derived from the ratio of the SAT anomalies to the radiative forcing changes. Our results reveal that this estimate of the Charney climate sensitivity is a function of the background climate, with substantially higher values for warmer climates. Warm phases exhibit an equilibrium climate sensitivity of ~3.70 K per CO2 doubling - more than twice the value derived for cold phases (~1.40 K per 2xCO2). We will show that the current CMIP5 ensemble-mean projection of global warming during the 21st century is supported by our estimate of climate sensitivity derived from paleoclimate data of the past 784,000 years.
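The core ratio-based estimate can be illustrated in a few lines: regress reconstructed SAT anomalies on reconstructed forcing (zero intercept) and scale the slope by the canonical CO2-doubling forcing. The anomaly and forcing values below are invented, not the paper's reconstructions.

```python
# Toy reconstruction: SAT anomaly (K) and radiative forcing (W m^-2) pairs
# (hypothetical values, not the paper's data).
sat_anomaly = [-5.1, -4.0, -2.2, -0.8, 0.3]
forcing = [-6.8, -5.4, -3.0, -1.1, 0.4]

F_2XCO2 = 3.7  # canonical radiative forcing for a CO2 doubling, W m^-2

# Climate sensitivity parameter: least-squares slope dT/dF through the origin.
num = sum(t * f for t, f in zip(sat_anomaly, forcing))
den = sum(f * f for f in forcing)
slope = num / den          # K per (W m^-2)
ecs = slope * F_2XCO2      # equilibrium sensitivity, K per CO2 doubling
```

A state-dependent sensitivity, as reported in the abstract, would show up here as a different slope when the regression is restricted to warm-phase versus cold-phase points.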
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
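A minimal sketch of this kind of Monte Carlo study, assuming Weibull-distributed unit lives and comparing an intrinsic shape parameter against a strong wear-out one; all parameter values are hypothetical, not actual ISS data.

```python
import random

random.seed(1)

def mission_failures(beta, eta, mission_hours, n_units, n_trials=2000):
    # Monte Carlo: for each trial, count how many of a fleet of units fail
    # within the mission, drawing each life from Weibull(shape=beta, scale=eta).
    counts = []
    for _ in range(n_trials):
        fails = sum(
            1 for _ in range(n_units)
            if random.weibullvariate(eta, beta) < mission_hours
        )
        counts.append(fails)
    return counts

# Intrinsic behaviour (beta ~ 1: random failures) vs strong wear-out (beta >> 1).
base = mission_failures(beta=1.0, eta=50000, mission_hours=20000, n_units=10)
worn = mission_failures(beta=4.0, eta=22000, mission_hours=20000, n_units=10)
```

The distribution of failure counts per trial directly gives the probability that a given number of spares is sufficient; shifting the shape parameter toward wear-out pushes that distribution upward.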
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
Tilt-Sensitivity Analysis for Space Telescopes
NASA Technical Reports Server (NTRS)
Papalexandris, Miltiadis; Waluschka, Eugene
2003-01-01
A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
Regional Fast Cloud Feedback Assessment As Constraint on Global Climate Sensitivity
NASA Astrophysics Data System (ADS)
Quaas, J.; Kuehne, P.; Block, K.; Salzmann, M.
2014-12-01
Uncertainty in climate sensitivity estimates from models relates to inter-model spread in the cloud-climate feedback. A sizeable component of the cloud-climate feedback is due to fast adjustments to altered CO2 profiles. This suggests that emerging large-domain, season-long cloud-resolving simulations might become useful as reference simulations when performing sensitivity simulations with doubled CO2 concentrations. We assessed the fast cloud feedback in the CMIP5 multi-model ensemble of general circulation models (GCMs) and found that, in the chosen example region of Central Europe, the fast cloud feedback in individual models behaves similarly to that over global land areas, yet shows a large inter-model scatter. This result is discussed with respect to the question of whether a regionally high-resolved model might be suitable to constrain global cloud feedbacks.
Sensitivity analysis of hydrodynamic stability operators
NASA Technical Reports Server (NTRS)
Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.
1992-01-01
The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
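The link between non-normality and eigenvalue sensitivity can be probed numerically: perturb a normal and a non-normal matrix sharing the same spectrum by random matrices of fixed small norm and compare the worst observed eigenvalue displacement (a crude Monte Carlo probe of the epsilon-pseudospectrum; the matrices are illustrative, not discretized stability operators).

```python
import numpy as np

rng = np.random.default_rng(3)

# A normal (diagonal) and a highly non-normal matrix with identical spectra.
normal = np.diag([-1.0, -2.0])
nonnormal = np.array([[-1.0, 100.0],
                      [0.0, -2.0]])

def max_eig_shift(A, eps=1e-6, trials=200):
    # Largest eigenvalue displacement over random complex perturbations
    # of 2-norm eps (eigenvalues paired by sorting, valid here because
    # the unperturbed eigenvalues are well separated).
    lam0 = np.sort(np.linalg.eigvals(A))
    worst = 0.0
    for _ in range(trials):
        E = rng.standard_normal(A.shape) + 1j * rng.standard_normal(A.shape)
        E *= eps / np.linalg.norm(E, 2)
        lam = np.sort(np.linalg.eigvals(A + E))
        worst = max(worst, np.max(np.abs(lam - lam0)))
    return worst

shift_normal = max_eig_shift(normal)
shift_nonnormal = max_eig_shift(nonnormal)
```

For the normal matrix the shift is bounded by eps (Bauer-Fike with condition number one), while the non-normal matrix, despite identical eigenvalues, shows shifts roughly two orders of magnitude larger, mirroring the sensitivity the abstract attributes to non-normal stability operators.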
Defining a fire year for reporting and analysis of global interannual fire variability
NASA Astrophysics Data System (ADS)
Boschetti, Luigi; Roy, David P.
2008-09-01
The interannual variability of fire activity has been studied without an explicit investigation of a suitable starting month for yearly calculations. Sensitivity analysis of 37 months of global MODIS active fire detections indicates that a 1-month change in the start of the fire year definition can lead, in the worst case, to a difference of over 6% and over 45% in global and subcontinental scale annual fire totals, respectively. Optimal starting months for analyses of global and subcontinental fire interannual variability are described. The research indicates that a fire year starting in March provides an optimal definition for annual global fire activity.
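The sensitivity test described above amounts to recomputing annual totals for each candidate starting month and comparing their spread; here is a sketch on an invented 37-month series (mirroring the 37-month record length, not the MODIS data).

```python
# Hypothetical 37 months of regional fire counts (index 0 = January of year 1);
# the second year is deliberately twice as active as the first and third.
base = [10, 12, 30, 80, 60, 20, 15, 40, 90, 70, 25, 12]
counts = base + [2 * x for x in base] + base + [10]

def annual_totals(counts, start_month):
    # Sum complete 12-month "fire years" beginning at start_month (0 = January).
    years = []
    i = start_month
    while i + 12 <= len(counts):
        years.append(sum(counts[i:i + 12]))
        i += 12
    return years

# Range (max - min) of annual totals for each candidate starting month:
# a proxy for how the fire-year definition changes apparent interannual variability.
spread = {m: max(annual_totals(counts, m)) - min(annual_totals(counts, m))
          for m in range(12)}
```

Even on this toy series the apparent interannual range depends on the chosen starting month, which is the effect the abstract quantifies at global and subcontinental scales.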
Global Analysis of Posttranslational Protein Arginylation
Rai, Reena; Bailey, Aaron O; Yates, John R; Wolf, Yuri I; Zebroski, Henry; Kashina, Anna
2007-01-01
Posttranslational arginylation is critical for embryogenesis, cardiovascular development, and angiogenesis, but its molecular effects and the identity of proteins arginylated in vivo are largely unknown. Here we report a global analysis of this modification on the protein level and identification of 43 proteins arginylated in vivo on highly specific sites. Our data demonstrate that, contrary to previous belief, arginylation can occur on any N-terminally exposed residue, likely defined by a structural recognition motif on the protein surface, and that it preferentially affects a number of physiological systems, including the cytoskeleton and primary metabolic pathways. The results of our study suggest that protein arginylation is a general mechanism for regulation of protein structure and function, and outline the potential role of protein arginylation in cell metabolism and embryonic development. PMID:17896865
NASA Technical Reports Server (NTRS)
Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven E.; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.
2008-01-01
As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties in a global 3-D chemical transport model (GEOS-Chem CTM). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system, which have significantly different cloud optical depths (CODs) and vertical distributions. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions, while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. Model simulations with each of the three cloud distributions show that the change in the global burden of O3 due to clouds is less than 5%. Model perturbation experiments with GEOS-3, in which the magnitudes of the 3-D CODs are progressively varied from -100% to +100%, predict only modest changes (<5%) in global mean OH concentrations. J(O1D), J(NO2) and OH concentrations show the strongest sensitivity for small CODs and become insensitive at large CODs due to saturation effects. Caution should be exercised not to use a cloud single-scattering albedo lower than about 0.999 in photochemical models, in order to be consistent with current knowledge of cloud absorption at UV wavelengths. Our results have important implications for model intercomparisons and climate feedbacks on tropospheric photochemistry.
Adjoint sensitivity analysis of hydrodynamic stability in cyclonic flows
NASA Astrophysics Data System (ADS)
Guzman Inigo, Juan; Juniper, Matthew
2015-11-01
Cyclonic separators are used in a variety of industries to efficiently separate mixtures of fluid and solid phases by means of centrifugal forces and gravity. In certain circumstances, the vortex core of cyclonic flows is known to precess due to the instability of the flow, which leads to performance reductions. We aim to characterize the unsteadiness using linear stability analysis of the Reynolds Averaged Navier-Stokes (RANS) equations in a global framework. The system of equations, including the turbulence model, is linearised to obtain an eigenvalue problem. Unstable modes corresponding to the dynamics of the large structures of the turbulent flow are extracted. The analysis shows that the most unstable mode is a helical motion which develops around the axis of the flow. This result is in good agreement with LES and experimental analysis, suggesting the validity of the approach. Finally, an adjoint-based sensitivity analysis is performed to determine the regions of the flow that, when altered, have most influence on the frequency and growth-rate of the unstable eigenvalues.
Sensitivity Studies for Space-Based Global Measurements of Atmospheric Carbon Dioxide
NASA Technical Reports Server (NTRS)
Mao, Jian-Ping; Kawa, S. Randolph; Bhartia, P. K. (Technical Monitor)
2001-01-01
Carbon dioxide (CO2) is well known as the primary forcing agent of global warming. Although the climate forcing due to CO2 is well known, the sources and sinks of CO2 are not well understood. Currently the lack of global atmospheric CO2 observations limits our ability to diagnose the global carbon budget (e.g., finding the so-called "missing sink") and thus limits our ability to understand past climate change and predict future climate response. Space-based techniques are being developed to make high-resolution and high-precision global column CO2 measurements. One of the proposed techniques utilizes the passive remote sensing of Earth's reflected solar radiation at the weaker vibration-rotation band of CO2 in the near infrared (approx. 1.57 micron). We use a line-by-line radiative transfer model to explore the potential of this method. Results of sensitivity studies for CO2 concentration variation and geophysical conditions (i.e., atmospheric temperature, surface reflectivity, solar zenith angle, aerosol, and cirrus cloud) will be presented. We will also present sensitivity results for an O2 A-band (approx. 0.76 micron) sensor that will be needed along with CO2 to make surface pressure and cloud height measurements.
A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models
NASA Astrophysics Data System (ADS)
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.
2013-12-01
Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and for using models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
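The local-but-distributed character of DELSA can be illustrated with a minimal sketch: first-order indices at one point in parameter space, computed from finite-difference gradients weighted by prior parameter variances. This is a simplification of the published method, and the toy model and prior variances below are invented for illustration.

```python
import numpy as np

def delsa_first_order(model, theta, prior_var, eps=1e-6):
    """DELSA-style local first-order indices at one parameter-space point.

    Gradients are estimated by forward finite differences; each parameter's
    index is its gradient-weighted share of the locally linearised output
    variance. (Sketch only; the published method evaluates many such points.)
    """
    y0 = model(theta)
    grads = np.empty(len(theta))
    for j in range(len(theta)):
        tp = theta.copy()
        tp[j] += eps
        grads[j] = (model(tp) - y0) / eps
    contrib = grads**2 * prior_var     # per-parameter local variance share
    return contrib / contrib.sum()

# Toy nonlinear model: output dominated by the first parameter at this point.
model = lambda t: t[0] ** 2 + 0.1 * t[1]
S = delsa_first_order(model, np.array([1.0, 1.0]), np.array([1.0, 1.0]))
```

Repeating this at many sampled points gives the multiscale picture of sensitivity across parameter space that the abstract describes.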
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to the factors upon which they depend (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
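As a concrete anchor for the variance-based (Sobol-type) approaches mentioned above, a minimal pick-freeze Monte Carlo estimator of first-order indices might look as follows; the additive test model and sample size are illustrative assumptions, not taken from the presentation.

```python
import numpy as np

def sobol_first_order(f, d, n=50_000, seed=0):
    """Monte Carlo first-order Sobol' indices for a model f of d independent
    U(0,1) inputs, via the pick-freeze scheme: S_i = E[yA*(yCi - yB)] / Var."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        Ci = B.copy()
        Ci[:, i] = A[:, i]          # freeze input i at its A-sample values
        S[i] = np.mean(yA * (f(Ci) - yB)) / var
    return S

# Additive test model Y = 2*X1 + X2: analytic indices are 0.8 and 0.2.
S = sobol_first_order(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
```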
Sensitivity Analysis of a process based erosion model using FAST
NASA Astrophysics Data System (ADS)
Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin
2015-04-01
deposition are related to overland flow velocity using the equation of Engelund and Hansen and the sinking velocity of grain sizes, respectively. The sensitivity analysis was performed based on virtual hillslopes similar to those in the Weiherbach catchment. We applied the FAST method (Fourier Amplitude Sensitivity Test), which provides a global sensitivity analysis with comparatively few model runs. We varied model parameters in predefined and, for the Weiherbach catchment, physically meaningful parameter ranges. Those parameters included rainfall intensity, surface roughness, hillslope geometry, land use, erosion resistance, and soil hydraulic parameters. The results of this study allow guiding further modelling efforts in the Weiherbach catchment with respect to data collection and model modification.
Sensitivity analysis of textural parameters for vertebroplasty
NASA Astrophysics Data System (ADS)
Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.
2002-05-01
Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed from the CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean trabecular width, area of the intertrabecular space, and trabecular area. Finally, the area of the intertrabecular space was selected as the parameter for estimating an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied among 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied among 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied among 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2
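The run-length computation underlying gray-level run length analysis can be sketched on a single binary image row; the example row below is invented for illustration.

```python
import numpy as np

def run_lengths(row, value):
    """Lengths of consecutive runs of `value` in a 1-D binary array."""
    padded = np.concatenate(([0], (row == value).astype(int), [0]))
    edges = np.diff(padded)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return ends - starts

# Binary row: 1 = trabecular bone, 0 = intertrabecular space.
row = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 0])
space_runs = run_lengths(row, 0)   # widths of the intertrabecular gaps
```

The mean of `space_runs` corresponds to the "mean width of the intertrabecular space" index, and its sum to the "area of the intertrabecular space" for that row.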
Topographic Avalanche Risk: DEM Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Nazarkulova, Ainura; Strobl, Josef
2015-04-01
GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, model-based applications, and project results have been published based on a variety of approaches, parametrizations and calibrations. Digital Elevation Models (DEM) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question is derived from simply demonstrating the differences in release risk areas and intensities by applying identical models to DEMs with different properties, and then extending this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach the work on DEM resolution-dependent 'slope spectra' is being considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of
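The geomorphometric derivatives at issue, slope in particular, depend directly on DEM resolution. A minimal central-difference slope computation can be sketched as follows; the synthetic tilted-plane DEM is purely illustrative.

```python
import numpy as np

def slope_deg(dem, cell):
    """Slope in degrees from a DEM grid, using central differences
    (one-sided at the edges) with the given cell size in metres."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Tilted plane rising 1 m per metre in x: slope is 45 degrees everywhere.
dem = np.tile(np.arange(5, dtype=float), (5, 1))
s = slope_deg(dem, cell=1.0)
```

Recomputing `s` after resampling the DEM to coarser cell sizes is the simplest way to expose the resolution dependence ('slope spectra') the abstract refers to.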
[Ecological sensitivity of Shanghai City based on GIS spatial analysis].
Cao, Jian-jun; Liu, Yong-juan
2010-07-01
In this paper, five sensitivity factors affecting the eco-environment of Shanghai City, i.e., rivers and lakes, historical relics and forest parks, geological disasters, soil pollution, and land use, were selected, and their weights were determined by analytic hierarchy process. Combining with GIS spatial analysis technique, the sensitivities of these factors were classified into four grades, i.e., highly sensitive, moderately sensitive, slightly sensitive, and insensitive, and the spatial distribution of the ecological sensitivity of Shanghai City was mapped. There was significant spatial differentiation in the ecological sensitivity of the City, with the insensitive, slightly sensitive, moderately sensitive, and highly sensitive areas occupying 37.07%, 5.94%, 38.16%, and 18.83%, respectively. Some suggestions on the City's zoning protection and construction were proposed. This study could provide scientific references for the City's environmental protection and economic development. PMID:20879541
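The weighted-overlay step described above can be sketched as a weighted sum of factor scores followed by threshold classification; the AHP weights, cell scores, and grade thresholds below are invented for illustration and are not the values used in the study.

```python
import numpy as np

# Hypothetical AHP weights for the five factors (rivers/lakes, relics and
# forest parks, geological disasters, soil pollution, land use).
weights = np.array([0.30, 0.15, 0.25, 0.20, 0.10])

# Factor scores per grid cell, each scaled to [0, 1] (illustrative).
cells = np.array([[0.9, 0.8, 0.7, 0.6, 0.5],
                  [0.1, 0.2, 0.1, 0.0, 0.2]])

score = cells @ weights                          # weighted overlay
# 0 = insensitive, 1 = slightly, 2 = moderately, 3 = highly sensitive
grades = np.digitize(score, [0.25, 0.5, 0.75])
```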
Global analysis of the immune response
NASA Astrophysics Data System (ADS)
Ribeiro, Leonardo C.; Dickman, Ronald; Bernardes, Américo T.
2008-10-01
The immune system may be seen as a complex system, characterized using tools developed in the study of such systems, for example, surface roughness and its associated Hurst exponent. We analyze densitometric (Panama blot) profiles of immune reactivity, to classify individuals into groups with similar roughness statistics. We focus on a population of individuals living in a region in which malaria is endemic, as well as a control group from a disease-free region. Our analysis groups individuals according to the presence, or absence, of malaria symptoms and number of malaria manifestations. Applied to the Panama blot data, our method proves more effective at discriminating between groups than principal-components analysis or super-paramagnetic clustering. Our findings provide evidence that some phenomena observed in the immune system can only be understood from a global point of view. We observe similar tendencies between experimental immune profiles and those of artificial profiles, obtained from an immune network model. The statistical entropy of the experimental profiles is found to exhibit variations similar to those observed in the Hurst exponent.
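A rough Hurst-exponent estimate for a profile-like series can be obtained from the scaling of increment standard deviations with lag. This is a simplified stand-in for the rescaled-range analysis typically used in such studies, and the Brownian test series is synthetic.

```python
import numpy as np

def hurst(series, lags=range(2, 64)):
    """Rough Hurst-exponent estimate from increment scaling:
    std[x(t + lag) - x(t)] ~ lag**H, fitted on a log-log line."""
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return H

# Ordinary Brownian motion has H close to 0.5 by construction.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(10_000))
H = hurst(walk)
```

Persistent (smoother) profiles yield H above 0.5 and anti-persistent (rougher) ones below it, which is the basis of the roughness classification described above.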
Sensitivity of water scarcity events to ENSO-driven climate variability at the global scale
NASA Astrophysics Data System (ADS)
Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.
2015-10-01
Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4 % of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6 % (CTA: consumption-to-availability ratio) and 41.1 % (WCI: water crowding index) of the global population, whilst only 11.4 % (CTA) and 15.9 % (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and
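The correlation screening implied here, between an ENSO index and water-availability anomalies, can be sketched with a Pearson correlation and its t statistic; the 50-year series below are synthetic, not observational data.

```python
import numpy as np

def pearson_with_t(x, y):
    """Pearson correlation and its t statistic (n - 2 degrees of freedom),
    the usual significance screen for a teleconnection in an annual series."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return r, t

# Synthetic 50-year record: availability anomalies partly ENSO-driven.
rng = np.random.default_rng(0)
enso = rng.standard_normal(50)
avail = -0.6 * enso + 0.8 * rng.standard_normal(50)
r, t = pearson_with_t(enso, avail)
```

Cells where |t| exceeds the critical value would count toward the "significantly correlated" land fraction reported in the abstract.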
Sensitivity of Water Scarcity Events to ENSO-Driven Climate Variability at the Global Scale
NASA Technical Reports Server (NTRS)
Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.
2015-01-01
Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño- Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (CTA: consumption-to-availability ratio) and 41.1% (WCI: water crowding index) of the global population, whilst only 11.4% (CTA) and 15.9% (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and scarcity
Global-local finite element analysis of composite structures
Deibler, J.E.
1992-06-01
The development of layered finite elements has facilitated analysis of laminated composite structures. However, the analysis of a structure containing both isotropic and composite materials remains a difficult problem. A methodology has been developed to conduct a "global-local" finite element analysis. A "global" analysis of the entire structure is conducted at the appropriate loads with the composite portions replaced by an orthotropic material of equivalent material properties. A "local" layered composite analysis is then conducted on the region of interest. The displacement results from the "global" analysis are used as loads for the "local" analysis. The laminate stresses and strains can then be examined and failure criteria evaluated.
NASA Astrophysics Data System (ADS)
Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.
2016-07-01
The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcome associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that warming exceeding 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions for most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk management approach can complement the focus on the likely range of projections. They can also highlight the scale of the potential reduction in range of projections, should tight constraints on climate sensitivity be established by future research.
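In its simplest form, the scaling of model output to the limits of expert-judged climate sensitivity reduces to linear scaling by the ratio of sensitivities. All numbers below are illustrative placeholders, not CMIP5 values.

```python
# Pattern-scaling sketch: stretch one model's projected warming to the
# bounds of an assumed expert-judgment range for equilibrium climate
# sensitivity (ECS). Values are invented for illustration.
model_warming_2100 = 3.2       # deg C by 2100 under a high-emissions pathway
model_ecs = 3.0                # that model's ECS, deg C
expert_ecs_range = (1.5, 6.0)  # assumed expert-judged ECS bounds, deg C

scaled = [model_warming_2100 * ecs / model_ecs for ecs in expert_ecs_range]
```

Even the low bound here stays above zero warming, while the high bound roughly doubles the model's own projection, mirroring the expanded range described in the abstract.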
NASA Astrophysics Data System (ADS)
Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.
2009-05-01
Clouds directly affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of these direct radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global three-dimensional (3-D) chemical transport model. The model was driven with a series of meteorological archives (GEOS-1 in support of the Stratospheric Tracers of Atmospheric Transport mission, or GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System (GEOS) data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3, and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of ˜15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments
NASA Technical Reports Server (NTRS)
Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.
2009-01-01
Clouds affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1- STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%), for GEOS1-STRAT, GEOS-3 and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of approx.15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitude of 3-D CODs are progressively varied from -100% to 100%, predict only modest
A pathway analysis of global aerosol processes
NASA Astrophysics Data System (ADS)
Schutgens, Nick; Stier, Philip
2014-05-01
smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that it is a relatively unimportant contributor to composition changes of aerosol. Our results provide an objective way of analysing complexity in a global aerosol model and will be used in future work where we will reduce this complexity in ECHAM-HAM.
Curtis, Janelle M.R.
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
Sensitivity analysis of channel-bend hydraulics influenced by vegetation
NASA Astrophysics Data System (ADS)
Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.
2015-12-01
Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
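Models that, like the FaSTMECH version described above, represent vegetation explicitly as a drag force typically use a quadratic momentum sink built from the same parameters the sensitivity analysis varies (density, frontal area, drag coefficient). A sketch with illustrative, uncalibrated values (not Bitterroot River data):

```python
# Quadratic vegetation drag per unit bed area:
#   F = 0.5 * rho * Cd * a * h * U**2
# All parameter values below are illustrative assumptions.
rho = 1000.0   # water density, kg/m^3
Cd = 1.2       # vegetation drag coefficient (dimensionless)
a = 0.1        # frontal area per unit volume, 1/m (density * stem width)
h = 0.5        # submerged plant height, m
U = 1.0        # depth-averaged velocity, m/s

F = 0.5 * rho * Cd * a * h * U**2   # momentum sink, N per m^2 of bed
```

Because Cd, a, and h enter the sink multiplicatively, their first-order sensitivities are near-equal, consistent with the ranking reported in the abstract.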
NASA Astrophysics Data System (ADS)
Shin, Mun-Ju; Guillaume, Joseph H. A.; Croke, Barry F. W.; Jakeman, Anthony J.
2013-10-01
Sensitivity analysis (SA) is generally recognized as a worthwhile step to diagnose and remedy difficulties in identifying model parameters, and indeed in discriminating between model structures. An analysis of papers in three journals indicates that SA is routinely omitted from hydrological modeling exercises. We provide some answers to ten reasonably generic questions using the Morris and Sobol SA methods, including to what extent sensitivities are dependent on parameter ranges selected, length of data period, catchment response type, model structures assumed and climatic forcing. Results presented demonstrate the sensitivity of four target functions to parameter variations of four rainfall-runoff models of varying complexity (4-13 parameters). Daily rainfall, streamflow and pan evaporation data are used from four 10-year data sets and from five catchments in the Australian Capital Territory (ACT) region. Similar results are obtained using the Morris and Sobol methods. It is shown how modelers can easily identify parameters that are insensitive, and how they might improve identifiability. Using a more complex objective function, however, may not result in all parameters becoming sensitive. Crucially, the results of the SA can be influenced by the parameter ranges selected. A data period of at least five years is required to characterize the sensitivities reliably. The results confirm that only the simpler models have well-identified parameters, but parameter sensitivities vary between catchments. Answering these ten questions in other case studies is relatively easy using freely available software with the Hydromad and Sensitivity packages in R.
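A bare-bones version of the Morris screening used above reports the mean absolute elementary effect (mu*) per parameter. The radial one-at-a-time sampling here is simplified relative to standard Morris trajectories, and the toy model is invented for illustration.

```python
import numpy as np

def morris_mu_star(f, d, r=40, delta=0.5, seed=0):
    """Crude Morris screening: mean absolute elementary effect (mu*) for
    each of d parameters, from r one-at-a-time perturbations of size delta
    at random base points in the unit hypercube."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, d))
    for k in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        y0 = f(x)
        for j in range(d):
            xp = x.copy()
            xp[j] += delta
            ee[k, j] = abs(f(xp) - y0) / delta
    return ee.mean(axis=0)

# Toy model: parameter 0 dominates, parameter 2 is inert.
mu = morris_mu_star(lambda x: 5 * x[0] + x[1] ** 2, d=3)
```

Parameters with mu* near zero are the "insensitive" ones that, as the abstract notes, modelers can readily identify and fix to improve identifiability.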
Sensitivity of water scarcity events to ENSO driven climate variability at the global scale
NASA Astrophysics Data System (ADS)
Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.
2015-06-01
Globally, freshwater shortage is one of the most important risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. However, less attention has been paid to the impacts of climate variability on water scarcity, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity and water availability to El Niño-Southern Oscillation (ENSO), the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1%); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population impacted, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (water stress) and 41.1% (water shortage) of the global population, whilst only 11.4% (water stress) and 15.9% (water shortage) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in the growth rates found under changing socioeconomic conditions, which are relatively high in regions affected by water scarcity events. Given the correlations found between ENSO and both water availability and water scarcity, and the relative developments of water scarcity impacts under changing socioeconomic conditions, we suggest
Global Carbon-Sink Sensitivity to Nitrogen: New Niches for Model-Data Symbiosis
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2011-12-01
To predict global environmental change it is crucial to determine the response of primary production to "CO2 fertilization". Dynamic global land models (DGLMs) are used to predict this long-term carbon (C) sink, but considerable disparity exists between the results obtained from different models. Constraining this divergence is necessary to reduce the uncertainty in our projections. To this end, recent model refinements have attempted to account for the important role that nitrogen (N) limitation plays in primary productivity. However, DGLMs rely on vast amounts of data, and thus far the focus has been on C. There is consequently a paucity of global N data to support model benchmarking and forcing. Ideally, we strive to use model simulations and data collection symbiotically; each benefiting the other, and reciprocating in turn. It would therefore be valuable to know which measurable N variables would support better benchmarking that also improves C-sink predictions. This work endeavors to identify the optimal choice of 1) benchmark data to better constrain long-term C-sink predictions, and 2) the variables that should be measured, or measured better, to support these benchmarks, with particular attention paid to N in both cases. Here we use LM3V, a state-of-the-art DGLM with coupled C-N cycling, to determine which processes most strongly control the long-term C sink. Concurrent sensitivity analyses are being performed to identify which of the model variables commonly used in benchmarking, or under consideration for future benchmarking (i.e. N variables), exhibit sensitivities that correlate with those identified for the C sink. Preliminary results indicate that the long-term C sink is strongly affected by factors that determine input or retention of N, as well as stoichiometric relationships. This sensitivity is most apparent as a long-term cumulative effect. At shorter time-scales many of the model outputs commonly used in benchmarking show relative insensitivity to these
Sensitivity of the global submarine hydrate inventory to scenarios of future climate change
NASA Astrophysics Data System (ADS)
Hunter, S. J.; Goldobin, D. S.; Haywood, A. M.; Ridgwell, A.; Rees, J. G.
2013-04-01
The global submarine inventory of methane hydrate is thought to be considerable. The stability of marine hydrates is sensitive to changes in temperature and pressure and, once destabilised, hydrates release methane into sediments and ocean and potentially into the atmosphere, creating a positive feedback with climate change. Here we present results from a multi-model study investigating how the methane hydrate inventory dynamically responds to different scenarios of future climate and sea level change. The results indicate that a warming-induced reduction is dominant even when assuming rather extreme rates of sea level rise (up to 20 mm yr-1) under moderate warming scenarios (RCP 4.5). Over the next century modelled hydrate dissociation is focussed in the top ~100 m of Arctic and Subarctic sediments beneath <500 m water depth. Predicted dissociation rates are particularly sensitive to the modelled vertical hydrate distribution within sediments. Under the worst-case business-as-usual scenario (RCP 8.5), upper estimates of resulting global sea-floor methane fluxes could exceed estimates of natural global fluxes by 2100 (>30-50 Tg CH4 yr-1), although subsequent oxidation in the water column could reduce peak atmospheric release rates to 0.75-1.4 Tg CH4 yr-1.
Ma, Hsi-Yen; Xiao, Heng; Mechoso, C. R.; Xue, Yongkang
2013-03-01
This study examines the sensitivity of global tropical climate to land surface processes (LSP) using an atmospheric general circulation model both uncoupled (with prescribed SSTs) and coupled to an oceanic general circulation model. The emphasis is on the interactive soil moisture and vegetation biophysical processes, which have a first-order influence on the surface energy and water budgets. The sensitivity to those processes is represented by the differences between model simulations, in which two land surface schemes are considered: 1) a simple land scheme that specifies surface albedo and soil moisture availability, and 2) the Simplified Simple Biosphere Model (SSiB), which allows for consideration of interactive soil moisture and vegetation biophysical processes. Observational datasets are also employed to assess the realism of model-revealed sensitivity. The mean state sensitivity to different LSP is stronger in the coupled mode, especially in the tropical Pacific. Furthermore, the seasonal cycle of SSTs in the equatorial Pacific, as well as ENSO frequency, amplitude, and phase locking to the seasonal cycle of SSTs, are significantly modified and more realistic with SSiB. This outstanding sensitivity of the atmosphere-ocean system develops through changes in the intensity of the equatorial Pacific trades modified by convection over land. Our results further demonstrate that the direct impact of land-atmosphere interactions on the tropical climate is modified by feedbacks associated with perturbed oceanic conditions (the "indirect effect" of LSP). The magnitude of this indirect effect is strong enough to suggest that comprehensive studies of the importance of LSP to the global climate have to be made in a system that allows for atmosphere-ocean interactions.
Extended forward sensitivity analysis of one-dimensional isothermal flow
Johnson, M.; Zhao, H.
2013-07-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity of the solution to the time step relative to the other physical parameters, the simulation can be run at optimized time steps without compromising confidence in the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
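The core idea of forward sensitivity analysis, differentiating the governing equations with respect to a parameter and integrating the resulting sensitivity equation alongside the state, can be shown on a scalar ODE. The decay equation below is an illustrative stand-in, not the paper's 1-D isothermal flow equations:

```python
import math

# Forward sensitivity for the decay equation dy/dt = -k*y, y(0) = y0.
# Differentiating the ODE with respect to k gives a companion equation for
# the sensitivity s = dy/dk:  ds/dt = -y - k*s,  s(0) = 0.
k, y0 = 0.5, 2.0
t_end, n = 4.0, 40_000
dt = t_end / n

y, s = y0, 0.0
for _ in range(n):
    # advance the state and its sensitivity together (explicit Euler);
    # both right-hand sides use the values from the previous step
    y, s = y + dt * (-k * y), s + dt * (-y - k * s)

# analytic check: y = y0*exp(-k*t) and s = dy/dk = -t*y0*exp(-k*t)
y_exact = y0 * math.exp(-k * t_end)
s_exact = -t_end * y0 * math.exp(-k * t_end)
```

The same pattern extends to systems of discretized PDEs: each parameter adds one sensitivity system of the same size as the state, and treating the time step itself as a parameter yields the discretization-error sensitivity the abstract describes.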
Global stability analysis of electrified jets
NASA Astrophysics Data System (ADS)
Rivero-Rodriguez, Javier; Pérez-Saborid, Miguel
2014-11-01
Electrospinning is a common process used to produce micro- and nano-scale polymeric fibers. In this technique, the whipping mode of a very thin electrified jet generated in an electrospray device is enhanced in order to increase its elongation. In this work, we use a theoretical Eulerian model that describes the kinematics and dynamics of the midline of the jet, its radius and convective velocity. The model equations result from balances of mass, linear and angular momentum applied to any differential slice of the jet, together with constitutive laws for viscous forces and moments, as well as appropriate expressions for capillary and electrical forces. As a first step towards computing the complete nonlinear, transient dynamics of the electrified jet, we have performed a global stability analysis of the aforementioned equations and compared the results with experimental data obtained by Guillaume et al. [2011] and Guerrero-Millán et al. [2014]. The support of the Ministry of Science and Innovation of Spain (Project DPI 2010-20450-C03-02) is acknowledged.
Global reference analysis and visualization environment (GRAVE)
NASA Astrophysics Data System (ADS)
Rodgers, Todd K.; Cochand, Jeffrey A.; Sivak, Joseph A.
1993-03-01
The Global Reference Analysis and Visualization Environment (GRAVE) is a research prototype multimedia system that manages a diverse variety of data types and presents them to the user in a format that is geographically referenced to the surface of a globe. When the user interacts with the globe, the system automatically manages the `level-of-detail' issues to support these user actions (allowing flexible functionality without sacrificing speed or information content). To manage the complexity of the presentation of the (visual) information to the user, data instantiations may be represented in an iconified format. When the icons are picked, or selected, the data `reveal' themselves in their `native' format. Object-oriented programming and data type constructs were employed, allowing a single `look and feel' to be presented to the user for the different media types. GRAVE currently supports the following data types: imagery (from various sources of differing resolution, coverage, and projection); elevation data (from DMA and USGS); physical simulation results (atmospheric, geological, hydrologic); video acquisitions; vector data (geographical, political boundaries); and textual reports. GRAVE was developed in the Application Visualization System (AVS) Visual Programming Environment (VPE); as such it is easily modifiable and reconfigurable, supporting the integration of new processing techniques/approaches as they become available or are developed.
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961
Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio
2006-01-01
Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50(deg)/75(deg)C, -55(deg)/100(deg)C, and -55(deg)/125(deg)C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.
Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling
Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis
2001-10-01
A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.
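The surrogate-based workflow described above can be sketched in three steps: sample the expensive model sparsely, fit a cheap response surface, and estimate variance-based indices by dense sampling of the surrogate. The quadratic test function and the binning estimator below are illustrative assumptions, not the HES model or the specific method of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    # stand-in for a costly dynamic system simulation (hypothetical)
    return 3 * x[:, 0] ** 2 + x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

def quad_features(x):
    # quadratic response-surface basis in two inputs
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

# Step 1: small training sample of the expensive model
X = rng.random((200, 2))
y = expensive_model(X)

# Step 2: fit the surrogate by least squares
coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def surrogate(x):
    return quad_features(x) @ coef

# Step 3: dense sampling of the cheap surrogate
Xs = rng.random((100_000, 2))
Ys = surrogate(Xs)

def first_order_index(xcol, yvals, bins=50):
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning X_i on [0, 1]
    idx = np.minimum((xcol * bins).astype(int), bins - 1)
    cond_means = np.array([yvals[idx == b].mean() for b in range(bins)])
    return cond_means.var() / yvals.var()

S = [first_order_index(Xs[:, i], Ys) for i in range(2)]
```

For this test function the first input dominates (analytic indices are roughly 0.88 and 0.12); the point of the surrogate is that step 3 costs thousands of evaluations, all of which hit the cheap fitted polynomial rather than the expensive simulation.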
Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik
2015-01-16
We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.
Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change
NASA Technical Reports Server (NTRS)
Thompson, Anne M.; Stewart, Richard W.; Owens, Melody A.
1989-01-01
The sensitivities of tropospheric HO2 and hydrogen peroxide (H2O2) levels to increases in CH4, CO, and NO emissions and to changes in stratospheric O3 and tropospheric O3 and H2O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH4-CO-NO(x) emissions and global climate changes are used to predict HO2 and H2O2 changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH4 emissions will enhance H2O2; NO emissions will suppress H2O2 except in very low NO(x) regions. A global warming or stratospheric O3 depletion will add to H2O2. Hydrogen peroxide increases from 1980 to 2030 could be 100 percent or more in the urban boundary layer.
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Leighty, Bradley D.; Lipford, William E.; Wong, Oliver D.; Oglesby, Donald M.; Ingram, JoAnne L.
2007-01-01
This paper will describe the results from a proof of concept test to examine the feasibility of using Pressure Sensitive Paint (PSP) to measure global surface pressures on rotorcraft blades in hover. The test was performed using the U.S. Army 2-meter Rotor Test Stand (2MRTS) and 15% scale swept rotor blades. Data were collected from five blades using both the intensity- and lifetime-based approaches. This paper will also outline several modifications and improvements that are underway to develop a system capable of measuring pressure distributions on up to four blades simultaneously at hover and forward flight conditions.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Song, Chen; Schwarzkopf, Dietrich S.; Rees, Geraint
2013-01-01
The surface area of early visual cortices varies several fold across healthy adult humans and is genetically heritable. But the functional consequences of this anatomical variability are still largely unexplored. Here we show that interindividual variability in human visual cortical surface area reflects a tradeoff between sensitivity to visual details and susceptibility to visual context. Specifically, individuals with larger primary visual cortices can discriminate finer orientation differences, whereas individuals with smaller primary visual cortices experience stronger perceptual modulation by global orientation contexts. This anatomically correlated tradeoff between discrimination sensitivity and contextual modulation of orientation perception, however, does not generalize to contrast perception or luminance perception. Neural field simulations based on a scaling of intracortical circuits reproduce our empirical observations. Together our findings reveal a feature-specific shift in the scope of visual perception from context-oriented to detail-oriented with increased visual cortical surface area. PMID:23887643
Automated sensitivity analysis using the GRESS language
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies.
Sensitivity analysis of Stirling engine design parameters
Naso, V.; Dong, W.; Lucentini, M.; Capata, R.
1998-07-01
In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle and dead volume ratio) have to be assumed; in practice it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations in these parameters.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Global Analysis of Photosynthesis Transcriptional Regulatory Networks
Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.
2014-01-01
Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis. PMID:25503406
Fuzzy sensitivity analysis for reliability assessment of building structures
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2016-06-01
The mathematical concept of fuzzy sensitivity analysis, which studies the effects of the fuzziness of input fuzzy numbers on the fuzziness of the output fuzzy number, is described in the article. The output fuzzy number is evaluated using Zadeh's general extension principle. The contribution of stochastic and fuzzy uncertainty in reliability analysis tasks of building structures is discussed. The algorithm of fuzzy sensitivity analysis is an alternative to stochastic sensitivity analysis in tasks in which input and output variables are considered as fuzzy numbers.
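For a function that is monotone on the support of the input, Zadeh's extension principle reduces to interval arithmetic on alpha-cuts: each alpha-cut of the output fuzzy number is the image of the corresponding input alpha-cut. A minimal sketch for a triangular fuzzy input and f(x) = x² (an illustrative choice, not a structural reliability model):

```python
import numpy as np

# Triangular fuzzy input number defined by (left, peak, right) = (1, 2, 3).
a, b, c = 1.0, 2.0, 3.0

def alpha_cut(alpha):
    # interval of points with membership >= alpha
    return a + alpha * (b - a), c - alpha * (c - b)

def f(x):
    # monotone increasing on the positive support, so the output alpha-cut
    # is simply the image of the input alpha-cut endpoints
    return x ** 2

# propagate each alpha-cut through f (Zadeh's extension principle)
alphas = np.linspace(0.0, 1.0, 11)
out_cuts = [(f(lo), f(hi)) for lo, hi in map(alpha_cut, alphas)]

# the width of each output cut measures the propagated fuzziness;
# out_cuts[0] is the output support, out_cuts[-1] its core
widths = [hi - lo for lo, hi in out_cuts]
```

A fuzzy sensitivity analysis in the sense of the abstract then repeats this propagation while varying the fuzziness (here, the spread c - a) of each input in turn and compares the resulting output widths.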
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D
2015-07-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
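Several abstracts in this section rely on variance-based global sensitivity indices of the kind used in the GSA step above. As a minimal, self-contained illustration of the idea (a sketch with an assumed additive test model, not code from any of the studies indexed here), first-order Sobol' indices can be estimated with a pick-freeze Monte Carlo estimator:

```python
import random

def sobol_first_order(model, n_params, n_samples, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices:
    S_i = Var(E[Y | X_i]) / Var(Y), for independent uniform(0, 1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean_A = sum(yA) / n_samples
    mean_B = sum(yB) / n_samples
    var_A = sum((y - mean_A) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_params):
        # rows of B with coordinate i "frozen" to the matching row of A
        yC = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n_samples - mean_A * mean_B
        indices.append(cov / var_A)
    return indices

# Additive test model Y = 4*X0 + 2*X1: analytic indices are 0.8 and 0.2
S = sobol_first_order(lambda x: 4 * x[0] + 2 * x[1], 2, 20000)
```

Ranking the resulting indices is the kind of input-parameter ranking the HSI study reports per site; production work would use a dedicated GSA library rather than this sketch.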
Measuring global temperatures: Their analysis and interpretation
NASA Astrophysics Data System (ADS)
Pielke, Roger A., Sr.
2011-07-01
This book documents how global surface temperature anomalies (GSTAs) and multidecadal trends are obtained. While ocean heat content change is a more robust metric with which to diagnose global warming, GSTAs have become a primary icon in the climate change debate. The book begins with a brief overview chapter of the Earth's radiative energy budget followed by two chapters on measurement approaches to monitoring temperature, including an interesting discussion of temperature scales. Chapters 4-6 concern measuring land and ocean temperatures. Chapters 7 and 8 discuss global networks and how point measurements are converted to obtain global averages. Chapter 9 focuses on changes in time of temperatures, including maximum and minimum values. This is followed by a short chapter on temperature profiles through the atmosphere and a final chapter of recommendations for future observations of this metric.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization depends on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Because seals are sensitive to external noise emitted by predators, the random walk of the seal pup takes two different search states: a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization, and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space. RSS can efficiently mimic seal pup behavior to find the best lair and provides a new algorithm to be used in global
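The two-state walk described in this abstract can be sketched in a few lines. The version below is a toy reconstruction under stated assumptions (greedy acceptance, a fixed 10% state-toggle probability standing in for predator noise, Pareto-tailed jumps standing in for the Levy walk); it is not the published RSS implementation:

```python
import random

def ringed_seal_search(f, dim, iters=4000, seed=1):
    """Toy RSS-style search: switch between an intensive Brownian walk
    (normal state) and an extensive heavy-tailed walk (urgent state)."""
    rng = random.Random(seed)
    best = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best_val = f(best)
    urgent = False
    for _ in range(iters):
        if rng.random() < 0.1:        # "predator noise" toggles the state
            urgent = not urgent
        if urgent:
            # extensive search: heavy-tailed (Levy-like) jump lengths
            step = [0.1 * rng.paretovariate(1.5) * rng.choice((-1.0, 1.0))
                    for _ in range(dim)]
        else:
            # intensive search: small Gaussian moves between nearby lairs
            step = [rng.gauss(0.0, 0.1) for _ in range(dim)]
        cand = [x + s for x, s in zip(best, step)]
        val = f(cand)
        if val < best_val:            # keep the better "lair"
            best, best_val = cand, val
    return best, best_val

# Minimize the 2-D sphere function as an illustrative benchmark
sphere = lambda x: sum(xi * xi for xi in x)
best, val = ringed_seal_search(sphere, dim=2)
```

On a smooth unimodal benchmark like the sphere function, the Brownian state does most of the refinement and the Levy state provides occasional long escapes, which is the exploration/exploitation balance the paper argues for.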
Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K.
2011-01-01
There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a “food first” approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO2 on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y−1, depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and, for the remaining potentially sensitive parameters, accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over
NASA Astrophysics Data System (ADS)
Piecuch, Christopher; Heimbach, Patrick; Ponte, Rui; Forget, Gael
2015-04-01
An ocean general circulation model in a global configuration, constrained to observations over the period 1993-2010 as part of the ECCO (Estimating the Circulation and Climate of the Ocean) project, has been used to infer the influence of geothermal flow on estimates of contemporary sea level changes. Two distinct simulations are compared, which differ only with regard to whether they apply geothermal flow as a bottom boundary condition. Geothermal flow forcing increases the global mean sea level trend over 1993-2010 by 0.11 mm yr-1 in the perturbation simulation relative to the control simulation with no geothermal forcing, mostly due to increased net thermal expansion in the deep ocean (below 2000 m). The Southern Ocean is particularly sensitive to geothermal flow, with differences between regional sea level trends from the perturbation and control simulations up to ±1 mm yr-1 in some places. More generally, it is suggested that ocean heat transports redistribute the geothermal input along constant pressure surfaces and constant surfaces of temperature or salinity. This redistribution of heat results in stronger (weaker) steric height trend differences between the two solutions over deeper (shallower) areas, and effects anomalous redistribution of ocean mass from deeper to shallower areas in the perturbation solution relative to the control solution. Given the sparsity of heat flow measurements, ocean state estimation could (in principle) be a means to the end of constraining solid Earth heat flow estimates over the global ocean.
NASA Astrophysics Data System (ADS)
Centoni, Federico; Stevenson, David; Fowler, David; Nemitz, Eiko; Coyle, Mhairi
2015-04-01
Concentrations of ozone at the surface are strongly affected by deposition to the surface. Deposition processes are very sensitive to temperature and relative humidity at the surface and are expected to respond to global change, with implications for both air quality and ecosystem services. Many studies have shown that ozone stomatal uptake by vegetation typically accounts for 40-60% of total deposition on average and the other part which occurs through non-stomatal pathways is not constant. Flux measurements show that non-stomatal removal increases with temperature and under wet conditions. There are large uncertainties in parameterising the non-stomatal ozone deposition term in climate chemistry models and model predictions vary greatly. In addition, different model treatments of dry deposition constitute a source of inter-model variability in surface ozone predictions. The main features of the original Unified Model-UK Chemistry and Aerosols (UM-UKCA) dry deposition scheme and the Zhang et al. 2003 scheme, which introduces in UM-UKCA a more developed non-stomatal deposition approach, are presented. This study also estimates the relative contributions of ozone flux via stomatal and non-stomatal uptakes at the global scale, and explores the sensitivity of simulated surface ozone and ozone deposition flux by implementing different non-stomatal parameterization terms. With a view to exploring the potential influence of future climate, we present results showing the effects of variations in some meteorological parameters on present day (2000) global ozone predictions. In particular, this study revealed that the implementation of a more mechanistic representation of the non-stomatal deposition in UM-UKCA model along with a decreased stomatal uptake due to the effect of blocking under wet conditions, accounted for a substantial reduction of ozone fluxes to broadleaf trees in the tropics with an increase of annual mean surface ozone. On the contrary, a large increase of
Global Human Settlement Analysis for Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Pesaresi, M.; Ehrlich, D.; Ferri, S.; Florczyk, A.; Freire, S.; Haag, F.; Halkia, M.; Julea, A. M.; Kemper, T.; Soille, P.
2015-04-01
The Global Human Settlement Layer (GHSL) is supported by the European Commission, Joint Research Center (JRC) in the frame of its institutional research activities. The scope of the GHSL is to develop, test, and apply the technologies and analysis methods integrated in the JRC Global Human Settlement analysis platform for applications in support of global disaster risk reduction initiatives (DRR) and regional analysis in the frame of the European Cohesion policy. The GHSL analysis platform uses geo-spatial data, primarily remotely sensed imagery and population data. The GHSL also cooperates with the Group on Earth Observation on SB-04-Global Urban Observation and Information, and with various international partners, World Bank, and United Nations agencies. Some preliminary results integrating global human settlement information extracted from Landsat data records of the last 40 years and population data are presented.
Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health
2010-01-01
It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all. PMID:20849605
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
Partial Differential Algebraic Sensitivity Analysis Code
1995-05-15
PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical, or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed, with members of order 0 or 1 in t and order 0, 1, or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.
Aero-Structural Interaction, Analysis, and Shape Sensitivity
NASA Technical Reports Server (NTRS)
Newman, James C., III
1999-01-01
A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
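The complex-variable technique this abstract refers to perturbs the input along the imaginary axis; because no subtraction of nearly equal numbers occurs, the step size can be made arbitrarily small, which is why the method is step-size independent. A minimal one-variable sketch on an illustrative function (not the aero-structural software):

```python
import cmath
import math

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~ Im(f(x + i*h)) / h.
    No subtractive cancellation, hence no step-size dilemma."""
    return f(complex(x, h)).imag / h

# Derivative of sin at x = 1.3; the exact answer is cos(1.3)
d = complex_step(cmath.sin, 1.3)
```

Compare with a forward difference (f(x+h) - f(x)) / h, which loses all accuracy for a step this small because the numerator cancels to round-off noise.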
Automating sensitivity analysis of computer models using computer calculus
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
Automated procedure for sensitivity analysis using computer calculus
Oblow, E.M.
1983-05-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach was found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
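GRESS instrumented FORTRAN source so that derivative information propagates alongside values. The same "computer calculus" idea can be sketched with forward-mode dual numbers; the class below is a minimal Python illustration of the principle, not GRESS itself:

```python
class Dual:
    """Forward-mode 'computer calculus': carry a value and its derivative
    together through arithmetic, so no finite differences are needed."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule, applied automatically at every multiplication
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sensitivity(f, x):
    """d f / d x at x, exact to machine precision."""
    return f(Dual(x, 1.0)).der

# d/dx (3*x*x + 2*x) at x = 2 -> 14
g = sensitivity(lambda x: 3 * x * x + 2 * x, 2.0)
```

This removes the "tedious human effort" of hand-deriving sensitivity equations in the same spirit as the GRESS compiler, though GRESS worked by source-to-source transformation rather than operator overloading.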
NASA Astrophysics Data System (ADS)
Müller Schmied, Hannes; Eisner, Stephanie; Franz, Daniela; Wattenbach, Martin
2013-04-01
Large scale hydrological models and land surface models are applied to simulate the global terrestrial water cycle and to estimate global renewable water resources. In recent years the growing availability of global data sets to force and constrain these models, e.g. remote sensing and reanalysis products, has essentially improved estimates of renewable water resources. However, results still vary significantly between models and/or input data sets highlighting the uncertainty of those estimates. In this study, we will test the sensitivity of simulated renewable water resources to climate and land use data sets and to varying model complexity using the global hydrological model WaterGAP (Water Global Analysis and Prognosis), version 2.2. The model is calibrated against observed discharge records by adjusting one independent parameter, which controls the fraction of total runoff from effective precipitation. The aim is to minimize the discrepancy in simulated long-term annual discharge compared to measured ones. Due to e.g. model structure or input data uncertainty this calibration procedure is not successful in all river basins, i.e. simulated long-term annual discharge still deviates more than +/- 1 % from the observed one. In these cases, correction factors are applied to avoid error propagation to downstream catchments. In this context, we define calibration success as the ability to calibrate with a minimum of correction factors, which is an indicator of the model's ability (including the underlying input data) to reproduce observed long term discharge. In order to assess the impact of different input data sets and modified model structure on calibration success, model calibration was performed in three different experimental setups: (1) WaterGAP was forced with different climate input data sets (WATCH Forcing Data; CRU TS 3.2/GPCC v.6) to evaluate the impact of climate input, especially precipitation; (2) WaterGAP simulations were based on two different global
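The calibration step described above, adjusting one runoff parameter until simulated long-term annual discharge matches the observed value, can be sketched as a scalar root search. The bisection below assumes discharge increases monotonically with the parameter; the toy linear "model" is purely illustrative and is not WaterGAP:

```python
def calibrate_runoff_parameter(simulate, observed, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect on a single parameter so simulated long-term discharge
    matches the observed value (assumes simulate() is increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate(mid) < observed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy "model": discharge = 100 * gamma; target 37 recovers gamma = 0.37
gamma = calibrate_runoff_parameter(lambda g: 100.0 * g, 37.0)
```

When no parameter value in the admissible range closes the gap, the residual mismatch is what the correction factors in the abstract absorb.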
A topological approach to computer-aided sensitivity analysis
NASA Technical Reports Server (NTRS)
Chan, S. P.; Munoz, R. M.
1971-01-01
Sensitivities of any arbitrary system are calculated using general purpose digital computer with available software packages for transfer function analysis. Sensitivity shows how element variation within system affects system performance. Signal flow graph illustrates topological system behavior and relationship among parameters in system.
Advanced Fuel Cycle Economic Sensitivity Analysis
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, the nuclear power related cost study from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Global Proteome Analysis of Leptospira interrogans
Technology Transfer Automated Retrieval System (TEKTRAN)
Comparative global proteome analyses were performed on Leptospira interrogans serovar Copenhageni grown under conventional in vitro conditions and those mimicking in vivo conditions (iron limitation and serum presence). Proteomic analyses were conducted using iTRAQ and LC-ESI-tandem mass spectrometr...
Toward Global Content Analysis and Media Criticism.
ERIC Educational Resources Information Center
Nordenstreng, Kaarle
1995-01-01
Presents the background, rationale, and implementation prospects for an international system of monitoring media coverage of global problems such as peace and war, human rights, and the environment. Outlines the monitoring project carried out in January 1995 concerning the representation and portrayal of women in news media. (SR)
Sensitivity Analysis in Complex Plasma Chemistry Models
NASA Astrophysics Data System (ADS)
Turner, Miles
2015-09-01
The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of the helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
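The Morris screening procedure applied in this abstract computes "elementary effects": one-at-a-time finite differences of the output, repeated from many random base points. The mean absolute effect (mu*) ranks influence, so insensitive rate constants can be discarded cheaply. A simplified random-OAT sketch with an assumed test model (not the plasma chemistry code):

```python
import random

def morris_mu_star(model, n_params, n_traj=50, delta=0.1, seed=0):
    """Morris-style screening: mean absolute elementary effect per input,
    averaged over random base points in the unit hypercube."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.random() * (1.0 - delta) for _ in range(n_params)]
        y0 = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta            # perturb one factor at a time
            effects[i].append((model(xp) - y0) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Test model: x0 dominates, x1 is weakly nonlinear, x2 is inert
mu = morris_mu_star(lambda x: 10 * x[0] + x[1] ** 2 + 0 * x[2], 3)
```

Parameters whose mu* is near zero (like x2 here) are the ones a screening step can safely drop before any expensive variance-based analysis.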
Global sensitivity of high-resolution estimates of crop water footprint
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2015-10-01
Most of the human appropriation of freshwater resources is for agriculture. Water availability is a major constraint to mankind's ability to produce food. The notion of virtual water content (VWC), also known as crop water footprint, provides an effective tool to investigate the linkage between food and water resources as a function of climate, soil, and agricultural practices. The spatial variability in the virtual water content of crops is here explored, disentangling its dependency on climate and crop yields and assessing the sensitivity of VWC estimates to parameter variability and uncertainty. Here we calculate the virtual water content of four staple crops (i.e., wheat, rice, maize, and soybean) for the entire world developing a high-resolution (5 × 5 arc min) model, and we evaluate the VWC sensitivity to input parameters. We find that food production almost entirely depends on green water (>90%), but, when applied, irrigation makes crop production more water efficient, thus requiring less water. The spatial variability of the VWC is mostly controlled by the spatial patterns of crop yields with an average correlation coefficient of 0.83. The results of the sensitivity analysis show that wheat is most sensitive to the length of the growing period, rice to reference evapotranspiration, maize and soybean to the crop planting date. The VWC sensitivity varies not only among crops, but also across the harvested areas of the world, even at the subnational scale.
Selecting step sizes in sensitivity analysis by finite differences
NASA Technical Reports Server (NTRS)
Iott, J.; Haftka, R. T.; Adelman, H. M.
1985-01-01
This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
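The step-size problem this paper addresses comes from two competing errors: truncation error grows with the step h, while round-off error grows as h shrinks. A standard rule of thumb (not the paper's FD algorithm) picks h near the cube root of machine epsilon for central differences:

```python
import math

def central_diff(f, x, h):
    """Second-order central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def auto_step(x, eps=2.2e-16):
    """Rule-of-thumb near-optimal step for central differences:
    h ~ eps**(1/3), scaled by the magnitude of x."""
    return eps ** (1.0 / 3.0) * max(abs(x), 1.0)

x = 1.3
d_good = central_diff(math.sin, x, auto_step(x))   # balanced step
d_tiny = central_diff(math.sin, x, 1e-13)          # round-off dominated
err_good = abs(d_good - math.cos(x))
err_tiny = abs(d_tiny - math.cos(x))
```

Automating this choice per derivative is precisely what relieves the "trial-and-error effort" the abstract mentions for the swept-wing sensitivity analysis.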
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
SYSTEMATIC SENSITIVITY ANALYSIS OF AIR QUALITY SIMULATION MODELS
This report reviews and assesses systematic sensitivity and uncertainty analysis methods for applications to air quality simulation models. The discussion of the candidate methods presents their basic variables, mathematical foundations, user motivations and preferences, computer...
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Several available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, the Champoux-Allard model employs five parameters. In this paper, the sensitivity of this model to its parameters is investigated as a function of frequency. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent hierarchy among the parameters is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with a rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam in order to illustrate the impact of reducing the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.
NASA Astrophysics Data System (ADS)
Poulter, Benjamin; Cadule, Patricia; Cheiney, Audrey; Ciais, Philippe; Hodson, Elke; Peylin, Philippe; Plummer, Stephen; Spessa, Allan; Saatchi, Sassan; Yue, Chao; Zimmermann, Niklaus E.
2015-02-01
Fire plays an important role in terrestrial ecosystems by regulating biogeochemistry, biogeography, and energy budgets, yet despite the importance of fire as an integral ecosystem process, significant advances are still needed to improve its prognostic representation in carbon cycle models. To recommend and help prioritize model improvements, this study investigates the sensitivity of a coupled global biogeography and biogeochemistry model, LPJ, to observed burned area measured by three independent satellite-derived products, GFED v3.1, L3JRC, and GlobCarbon. Model variables are compared with benchmarks that include pantropical aboveground biomass, global tree cover, and CO2 and CO trace gas concentrations. Depending on the prescribed burned area product, global aboveground carbon stocks varied by 300 Pg C, and woody cover ranged from 50 to 73 Mkm2. Tree cover and biomass were both reduced linearly with increasing burned area, i.e., at regional scales, a 10% reduction in tree cover per 1000 km2, and a 0.04-0.40 Mg C reduction per 1000 km2. In boreal regions, satellite burned area improved simulated tree cover and biomass distributions, but in savanna regions, model-data correlations decreased. Global net biome production was relatively insensitive to burned area, and the long-term land carbon sink was robust, ~2.5 Pg C yr-1, suggesting that feedbacks from ecosystem respiration compensated for reductions in fuel consumption via fire. CO2 transport provided further evidence that heterotrophic respiration compensated for any emission reductions in the absence of fire, with minor differences in modeled CO2 fluxes among burned area products. CO was a more sensitive indicator for evaluating fire emissions, with MODIS-GFED burned area producing CO concentrations largely in agreement with independent observations in high latitudes. This study illustrates how ensembles of burned area data sets can be used to diagnose model structures and parameters for further improvement and also
NASA Technical Reports Server (NTRS)
Dong, Stanley B.
1989-01-01
An important consideration in the global local finite-element method (GLFEM) is the availability of global functions for the given problem. The role and mathematical requirements of these global functions in a GLFEM analysis of localized stress states in prismatic structures are discussed. A method is described for determining these global functions. Underlying this method are theorems due to Toupin and Knowles on strain energy decay rates, which are related to a quantitative expression of Saint-Venant's principle. It is mentioned that a mathematically complete set of global functions can be generated, so that any arbitrary interface condition between the finite element and global subregions can be represented. Convergence to the true behavior can be achieved with increasing global functions and finite-element degrees of freedom. Specific attention is devoted to mathematically two-dimensional and three-dimensional prismatic structures. Comments are offered on the GLFEM analysis of a NASA flat panel with a discontinuous stiffener. Methods for determining global functions for other effects are also indicated, such as steady-state dynamics and bodies under initial stress.
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Fixed point sensitivity analysis of interacting structured populations.
Barabás, György; Meszéna, Géza; Ostling, Annette
2014-03-01
Sensitivity analysis of structured populations is a useful tool in population ecology. Historically, methodological development of sensitivity analysis has focused on the sensitivity of eigenvalues in linear matrix models, and on single populations. More recently there have been extensions to the sensitivity of nonlinear models, and to communities of interacting populations. Here we derive a fully general mathematical expression for the sensitivity of equilibrium abundances in communities of interacting structured populations. Our method yields the response of an arbitrary function of the stage class abundances to perturbations of any model parameters. As a demonstration, we apply this sensitivity analysis to a two-species model of ontogenetic niche shift where each species has two stage classes, juveniles and adults. In the context of this model, we demonstrate that our theory is quite robust to violating two of its technical assumptions: the assumption that the community is at a point equilibrium and the assumption of infinitesimally small parameter perturbations. Our results on the sensitivity of a community are also interpreted in a niche theoretical context: we determine how the niche of a structured population is composed of the niches of the individual states, and how the sensitivity of the community depends on niche segregation. PMID:24368160
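The core identity behind such fixed-point sensitivities can be sketched numerically. For a discrete-time model x = F(x, p) at an equilibrium x*, implicit differentiation gives dx*/dp = (I - J)^(-1) dF/dp, with J the Jacobian of F at x*. The two-stage model below is a hypothetical toy for illustration, not the ontogenetic niche-shift model of the paper:

```python
import numpy as np

def F(x, p):
    # Hypothetical two-stage model: juveniles j, adults a, fecundity parameter p
    j, a = x
    recruit = 0.9 * a * p / (1.0 + a)        # density-dependent recruitment
    return np.array([recruit, 0.4 * j + 0.6 * a])

def fixed_point(p, x0=(0.5, 0.5), iters=500):
    # Iterate the map to its (stable) equilibrium
    x = np.array(x0, float)
    for _ in range(iters):
        x = F(x, p)
    return x

def sensitivity(p, h=1e-6):
    """dx*/dp = (I - J)^(-1) dF/dp evaluated at the fixed point x*."""
    x = fixed_point(p)
    n = len(x)
    J = np.empty((n, n))
    for k in range(n):                        # Jacobian dF/dx by central differences
        e = np.zeros(n); e[k] = h
        J[:, k] = (F(x + e, p) - F(x - e, p)) / (2 * h)
    dFdp = (F(x, p + h) - F(x, p - h)) / (2 * h)
    return np.linalg.solve(np.eye(n) - J, dFdp)

s = sensitivity(2.0)   # analytic value for this toy model is (0.9, 0.9)
```

For this toy map the equilibrium at p = 2 is j* = a* = 0.8, and differentiating the equilibrium condition by hand gives dx*/dp = (0.9, 0.9), which the numerical formula reproduces.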
Global Analysis of Aerosol Properties Above Clouds
NASA Technical Reports Server (NTRS)
Waquet, F.; Peers, F.; Ducos, F.; Goloub, P.; Platnick, S. E.; Riedi, J.; Tanre, D.; Thieuleux, F.
2013-01-01
The seasonal and spatial variability of Aerosol Above Cloud (AAC) properties is derived from passive satellite data for the year 2008. A significant amount of aerosol is transported above liquid water clouds on the global scale. For particles in the fine mode (i.e., radius smaller than 0.3 μm), including both clear-sky and AAC retrievals increases the global mean aerosol optical thickness by 25 (±6)%. The two main regions with man-made AAC are the tropical Southeast Atlantic, for biomass burning aerosols, and the North Pacific, mainly for pollutants. Man-made AAC are also detected over the Arctic during spring. Mineral dust particles are detected above clouds within the so-called dust belt region (5-40°N). AAC may cause a warming effect and bias the retrieval of cloud properties. This study will thus help to better quantify the impacts of aerosols on clouds and climate.
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis, where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks, as follows: 1. Since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems). 2. Since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
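As a concrete illustration of the variance-based class, here is a minimal numpy sketch of the Saltelli "pick-freeze" estimators for the first-order and total-effect Sobol indices, applied to the standard Ishigami test function (plain random sampling is used here for simplicity, rather than a quasi-random design):

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    # Standard variance-based SA test function on [-pi, pi]^3;
    # analytic first-order indices: S1≈0.314, S2≈0.442, S3=0
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

d, N = 3, 2 ** 14
A = rng.uniform(-np.pi, np.pi, (N, d))        # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))        # total output variance

S1, ST = np.empty(d), np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # "pick-freeze": swap column i only
    fABi = ishigami(ABi)
    S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect (Jansen) index
```

A gap between ST and S1 for a factor (here x1, which interacts with x3) is exactly the interaction contribution that one-at-a-time screening methods cannot see.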
Advancing sensitivity analysis to precisely characterize temporal parameter dominance
NASA Astrophysics Data System (ADS)
Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola
2016-04-01
Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological
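The FDC segmentation step can be sketched as follows; the five exceedance-probability breakpoints below are illustrative assumptions, not the thresholds used in the study:

```python
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probability, sorted discharge) for a daily series."""
    q_sorted = np.sort(q)[::-1]
    n = len(q_sorted)
    p_exc = (np.arange(1, n + 1) - 0.5) / n   # Weibull-type plotting position
    return p_exc, q_sorted

def fdc_segment(q, edges=(0.0, 0.05, 0.2, 0.7, 0.95, 1.0)):
    """Label each day with one of five FDC segments (0 = peak flows ... 4 = low flows)."""
    # Exceedance probability of each day's discharge within the record
    ranks = (-q).argsort().argsort()          # rank 0 = highest flow
    p = (ranks + 0.5) / len(q)
    return np.digitize(p, edges[1:-1])

rng = np.random.default_rng(0)
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # synthetic daily discharge
seg = fdc_segment(q)
```

Given such labels, the daily FAST sensitivities can simply be grouped by segment (and by month) before averaging, which is the essence of the three-step procedure described above.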
NASA Astrophysics Data System (ADS)
Stein, O.; Schultz, M. G.; Bouarar, I.; Clark, H.; Katragkou, E.; Leitao, J.; Heil, A.
2012-04-01
The EU projects MACC (Monitoring Atmospheric Composition and Climate, 2009-2011) and MACC-II (2011-2014) prepare for the operational Global Monitoring for Environment and Security (GMES) atmospheric core service which is envisaged to start in 2014. Besides global service lines for greenhouse gases and aerosols, emphasis is put also on global monitoring and forecasting of reactive gases. The MACC reanalysis and forecast simulations benefit from the multi-sensor approach for data assimilation of ozone, CO and NO2 observations. Currently the Integrated Forecast System (IFS) of the European Centre for Medium-range Weather Forecasts (ECMWF) is coupled to the chemical transport model MOZART-3 to represent in detail the chemical conversion as well as major source and sink processes. A global emission inventory for reactive gases has been developed as part of the MACC project. Based upon the ACCMIP emissions for the year 2000 these emissions are extrapolated for years after 2000 with the Representative Concentration Pathway RCP8.5 scenario and extended for VOCs and several other species. This inventory composes the MACCity anthropogenic emission inventory (Granier et al. 2011). During the MACC project it became apparent that using the MACCity emissions in reanalysis simulations for recent years led to an underestimation of CO concentrations in the Northern Hemisphere when compared to independent observations. In order to give insight into the reasons for this behavior we conducted MOZART offline simulations for the year 2008 to test the sensitivity of the chemical transport model to the varying emissions. Therefore we ran MOZART with different sets of emissions: 1. MACCity emissions, 2. The GEMS/RETRO emission inventory, 3. MACCity emissions, but with increased traffic CO emissions. While using the emission inventory developed in the RETRO and GEMS projects gives quite reasonable tropospheric concentrations for the key species, the MACCity emissions are too low
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
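A minimal sketch of the nested (double-loop) sampling idea, with a hypothetical toy response rather than the challenge-problem model: the outer loop samples the epistemic parameter over its uncertainty interval, and the inner loop propagates aleatory variability at each fixed epistemic value.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(e, a):
    # Hypothetical toy model: epistemic scale factor e times aleatory input a^2
    return e * a ** 2

n_outer, n_inner = 50, 2000
e_samples = rng.uniform(0.5, 1.5, n_outer)   # epistemic interval [0.5, 1.5]

inner_means = np.empty(n_outer)
for i, e in enumerate(e_samples):
    a = rng.standard_normal(n_inner)          # aleatory variability
    inner_means[i] = response(e, a).mean()
# The spread of inner_means across the outer loop isolates the epistemic
# contribution; each inner loop captures aleatory variability alone.
```

Because E[a²] = 1, each inner-loop mean tracks its epistemic value e, so the outer-loop spread of the means reflects epistemic uncertainty while the within-loop scatter reflects aleatory variability.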
Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; Li, Dan
2015-04-27
The critical bulk Richardson number (Ri_{cr}) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model, BCC_AGCM to Ri_{cr}. The results show that the simulated global average of PBL height increases nearly linearly with Ri_{cr}, with a change of about 114 m for a change of 0.5 in Ri_{cr}. The surface sensible (latent) heat flux decreases (increases) as Ri_{cr} increases. The influence of Ri_{cr} on surface air temperature and specific humidity is not significant. The increasing Ri_{cr} may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ri_{cr} affect stratiform and convective precipitations differently. Increasing Ri_{cr} leads to an increase in the stratiform precipitation but a decrease in the convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and Middle East.
NASA Astrophysics Data System (ADS)
Storto, Andrea; Yang, Chunxue; Masina, Simona
2016-05-01
The global ocean heat content evolution is a key component of the Earth's energy budget and can be consistently determined by ocean reanalyses that assimilate hydrographic profiles. This work investigates the impact of the atmospheric reanalysis forcing through a multiforcing ensemble ocean reanalysis, where the ensemble members are forced by five state-of-the-art atmospheric reanalyses during the meteorological satellite era (1979-2013). Data assimilation leads the ensemble to converge toward robust estimates of ocean warming rates and significantly reduces the spread (1.48 ± 0.18 W/m2, per unit area of the World Ocean); hence, the impact of the atmospheric forcing appears only marginal for the global heat content estimates in both upper and deeper oceans. A sensitivity assessment performed through realistic perturbation of the main sources of uncertainty in ocean reanalyses highlights that bias correction and preprocessing of in situ observations represent the most crucial component of the reanalysis, whose perturbation accounts for up to 60% of the ocean heat content anomaly variability in the pre-Argo period. Although these results may depend on the single reanalysis system used, they reveal useful information for the ocean observation community and for the optimal generation of perturbations in ocean ensemble systems.
The sensitivity of global climate to the episodicity of fire aerosol emissions
NASA Astrophysics Data System (ADS)
Clark, Spencer K.; Ward, Daniel S.; Mahowald, Natalie M.
2015-11-01
Here we explore the sensitivity of the global radiative forcing and climate response to the episodicity of fire emissions. We compare the standard approach used in present-day and future climate modeling studies, in which emissions are not episodic but smoothly interpolated between monthly mean values, with the response when fires are represented using a range of approximations of episodicity. The range includes cases with episodicity levels matching observed fire day and fire event counts, as well as cases with extreme episodicity. We compare the different emissions schemes in a set of Community Atmosphere Model (CAM5) simulations forced with reanalysis meteorology and a set of simulations with online dynamics designed to calculate aerosol indirect effect radiative forcings. We find that using climatologically observed fire frequency improves model estimates of cloud properties over the standard scheme, particularly in boreal regions, when both are compared to a simulation with meteorologically synchronized emissions. Using these emissions schemes leads to a range in global indirect effect radiative forcing of fire aerosols between -1.1 and -1.3 W m-2. In cases with extreme episodicity, we see increased vertical transport of aerosols, leading to longer lifetimes and less negative indirect effect radiative forcings. In general, the range in climate impacts that results from the different realistic fire emissions schemes is smaller than the uncertainty in climate impacts due to other aspects of modeling fire emissions.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
NASA Astrophysics Data System (ADS)
Hollingsworth, J. L.; Young, R. E.; Schubert, G.; Covey, C.; Grossman, A. S.
2007-03-01
A 3D global circulation model is adapted to the atmosphere of Venus to explore the nature of the planet's atmospheric superrotation. The model employs the full meteorological primitive equations and simplified forms for diabatic and other nonconservative forcings. It is therefore economical for performing very long simulations. To assess circulation equilibration and the occurrence of atmospheric superrotation, the climate model is run for 10,000-20,000 day integrations at 4° × 5° latitude-longitude horizontal resolution, and 56 vertical levels (denoted L56). The sensitivity of these simulations to imposed Venus-like diabatic heating rates, momentum dissipation rates, and various other key parameters (e.g., near-surface momentum drag), in addition to model configuration (e.g., low versus high vertical domain and number of atmospheric levels), is examined. We find equatorial superrotation in several of our numerical experiments, but the magnitude of superrotation is often less than observed. Further, the meridional structure of the mean zonal overturning (i.e., Hadley circulation) can consist of numerous cells which are symmetric about the equator and whose depth scale appears sensitive to the number of vertical layers imposed in the model atmosphere. We find that when realistic diabatic heating is imposed in the lowest several scale heights, only extremely weak atmospheric superrotation results.
ERIC Educational Resources Information Center
Clayton, Thomas
2004-01-01
In recent years, many scholars have become fascinated by a contemporary, multidimensional process that has come to be known as "globalization." Globalization originally described economic developments at the world level. More specifically, scholars invoked the concept in reference to the process of global economic integration and the seemingly…
Sobol's sensitivity analysis for a distributed hydrological model of Yichun River Basin, China
NASA Astrophysics Data System (ADS)
Zhang, Chi; Chu, Jinggang; Fu, Guangtao
2013-02-01
This paper aims to provide an enhanced understanding of the parameter sensitivities of the Soil and Water Assessment Tool (SWAT) using a variance-based global sensitivity analysis, i.e., Sobol's method. The Yichun River Basin, China, is used as a case study, and the sensitivity of the SWAT parameters is analyzed under typical dry, normal and wet years, respectively. To reduce the number of model parameters, some spatial model parameters are grouped in terms of data availability, and multipliers are then applied to the parameter groups, reflecting spatial variation in the distributed SWAT model. The SWAT model performance is represented using two statistical metrics, Root Mean Square Error (RMSE) and Nash-Sutcliffe Efficiency (NSE), and two hydrological metrics, RunOff Coefficient Error (ROCE) and Slope of the Flow Duration Curve Error (SFDCE). The analysis reveals the individual effects of each parameter and its interactions with other parameters. Parameter interactions contribute a significant portion of the variation in all metrics considered under normal and wet years. In particular, the variation in the two hydrological metrics is dominated by the interactions, illustrating the necessity of choosing a global sensitivity analysis method that is able to consider interactions in the SWAT model identification process. In the dry year, however, the individual effects control the variation in the three metrics other than SFDCE. Further, the two statistical metrics fail to identify the SWAT parameters that control the flashiness (i.e., variability of mid-flows) and the overall water balance. Overall, the results obtained from the global sensitivity analysis provide an in-depth understanding of the underlying hydrological processes under different metrics and climatic conditions in the case study catchment.
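The two statistical metrics used above are straightforward to compute; a minimal sketch:

```python
import numpy as np

def rmse(obs, sim):
    """Root Mean Square Error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

Because NSE normalizes squared errors by the observed variance, it weights high flows heavily, which is one reason purely statistical metrics can miss the parameters controlling mid-flow variability and water balance, as noted above.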
NASA Astrophysics Data System (ADS)
Zhao, J.; Tiede, C.
2011-05-01
An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. The uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. For a fixed number of executions, thirty different random seeds are tested to assess the stability of the method.
Global Gene Expression Analysis for the Assessment of Nanobiomaterials.
Hanagata, Nobutaka
2015-01-01
Using global gene expression analysis, the effects of biomaterials and nanomaterials can be analyzed at the genetic level. Even though information obtained from global gene expression analysis can be useful for the evaluation and design of biomaterials and nanomaterials, its use for these purposes is not widespread. This is due to the difficulties involved in data analysis. Because the expression data of about 20,000 genes can be obtained at once with global gene expression analysis, the data must be analyzed using bioinformatics. A method of bioinformatic analysis called gene ontology can estimate the kinds of changes on cell functions caused by genes whose expression level is changed by biomaterials and nanomaterials. Also, by applying a statistical analysis technique called hierarchical clustering to global gene expression data between a variety of biomaterials, the effects of the properties of materials on cell functions can be estimated. In this chapter, these theories of analysis and examples of applications to nanomaterials and biomaterials are described. Furthermore, global microRNA analysis, a method that has gained attention in recent years, and its application to nanomaterials are introduced. PMID:26201278
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
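The adjoint machinery described above has a compact discrete analogue that conveys the same economy: for a functional J(p) = gᵀu with state equation A(p)u = b, a single adjoint solve Aᵀλ = g yields dJ/dp = -λᵀ(dA/dp)u for any number of design parameters. A small hypothetical 2×2 example, checked against finite differences:

```python
import numpy as np

# Hypothetical parameterized state operator A(p) and fixed data b, g
def A(p):
    return np.array([[2.0 + p, 1.0],
                     [1.0,     3.0]])

b = np.array([1.0, 2.0])   # right-hand side of the state equation
g = np.array([1.0, 1.0])   # defines the functional J = g^T u
p0 = 0.5

u = np.linalg.solve(A(p0), b)        # state solve: A u = b
lam = np.linalg.solve(A(p0).T, g)    # adjoint solve: A^T lambda = g
dAdp = np.array([[1.0, 0.0],         # dA/dp for this parameterization
                 [0.0, 0.0]])
dJdp_adjoint = -lam @ dAdp @ u       # adjoint sensitivity formula

# Independent check: central finite difference of J(p) = g^T A(p)^{-1} b
h = 1e-6
Jp = lambda p: g @ np.linalg.solve(A(p), b)
dJdp_fd = (Jp(p0 + h) - Jp(p0 - h)) / (2 * h)
```

The adjoint route costs one extra linear solve regardless of the number of parameters, whereas the finite-difference route costs two state solves per parameter, which is the efficiency argument the abstract makes in the continuous setting.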
Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.
1994-01-01
During the first year of the project, we have developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing the results obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
Global spatial sensitivity of runoff to subsurface permeability using the active subspace method
NASA Astrophysics Data System (ADS)
Gilbert, James M.; Jefferson, Jennifer L.; Constantine, Paul G.; Maxwell, Reed M.
2016-06-01
Hillslope scale runoff is generated as a result of interacting factors that include water influx rate, surface and subsurface properties, and antecedent saturation. Heterogeneity of these factors affects the existence and characteristics of runoff. This heterogeneity becomes an increasingly relevant consideration as hydrologic models are extended and employed to capture greater detail in runoff generating processes. We investigate the impact of one type of heterogeneity - subsurface permeability - on runoff using the integrated hydrologic model ParFlow. Specifically, we examine the sensitivity of runoff to variation in three-dimensional subsurface permeability fields for scenarios dominated by either Hortonian or Dunnian runoff mechanisms. Ten thousand statistically consistent subsurface permeability fields are parameterized using a truncated Karhunen-Loève (KL) series and used as inputs to 48-h simulations of integrated surface-subsurface flow in an idealized 'tilted-v' domain. Coefficients of the spatial modes of the KL permeability fields provide the parameter space for analysis using the active subspace method. The analysis shows that for Dunnian-dominated runoff conditions the cumulative runoff volume is sensitive primarily to the first spatial mode, corresponding to permeability values in the center of the three-dimensional model domain. In the Hortonian case, runoff volume is sensitive to multiple smaller-scale spatial modes and the locus of that sensitivity is in the near-surface zone upslope from the domain outlet. Variation in runoff volume resulting from random heterogeneity configurations can be expressed as an approximately univariate function of the active variable, a weighted combination of spatial parameterization coefficients computed through the active subspace method. However, this relationship between the active variable and runoff volume is better defined for Dunnian runoff than for the Hortonian scenario.
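The active subspace machinery referred to above can be sketched in a few lines: estimate C = E[grad f grad f^T] by Monte Carlo over the parameter space, then take the dominant eigenvectors of C as the active directions. The example below uses a hypothetical scalar model with a built-in one-dimensional active subspace, not the ParFlow runoff output:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10                       # parameter dimension (e.g. KL mode coefficients)
a = rng.standard_normal(d)   # hidden direction: f varies only along a

def grad_f(x):
    # f(x) = exp(a @ x)  =>  grad f(x) = exp(a @ x) * a
    return np.exp(a @ x) * a

# Monte Carlo estimate of C = E[grad f grad f^T] over the input distribution
X = rng.uniform(-1.0, 1.0, size=(2000, d))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# Eigenvectors of C with large eigenvalues span the active subspace
eigval, eigvec = np.linalg.eigh(C)
w1 = eigvec[:, -1]                              # leading eigenvector
alignment = abs(w1 @ a) / np.linalg.norm(a)
assert alignment > 0.99                         # active direction recovered
assert eigval[-1] > 1e3 * abs(eigval[-2])       # effectively one active variable
```

Plotting model output against the active variable w1 @ x gives the approximately univariate relationship the abstract describes.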
Malaguerra, Flavio; Chambon, Julie C; Bjerg, Poul L; Scheutz, Charlotte; Binning, Philip J
2011-10-01
A fully kinetic biogeochemical model of sequential reductive dechlorination (SERD) occurring in conjunction with lactate and propionate fermentation, iron reduction, sulfate reduction, and methanogenesis was developed. Production and consumption of molecular hydrogen (H(2)) by microorganisms were modeled using modified Michaelis-Menten kinetics, and the model was implemented in the geochemical code PHREEQC. The model was calibrated using a Shuffled Complex Evolution Metropolis algorithm against observations of chlorinated solvents, organic acids, and H(2) concentrations in laboratory batch experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most important factors influencing SERD. The sensitivity analysis also suggests that it is not possible to simplify the model description if all system behaviors are to be well described. PMID:21877704
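The Morris method mentioned above screens parameters via 'elementary effects': one-at-a-time finite differences taken from many random base points, summarized by the mean absolute effect (mu*) and its spread (sigma). A minimal sketch on a hypothetical three-parameter test function (not the PHREEQC biogeochemical model):

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Hypothetical screening target: x0 strong and linear, x1 weaker and
    # nonlinear, x2 inert -- stand-ins for the calibrated model's parameters
    return 5.0 * x[0] + x[1] ** 2

d, r, delta = 3, 50, 0.1       # factors, trajectories, step size
ee = np.zeros((r, d))
for k in range(r):
    x = rng.uniform(0, 1 - delta, size=d)    # random base point in [0, 1)^d
    fx = model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta                        # one-at-a-time step
        ee[k, i] = (model(xp) - fx) / delta   # elementary effect of factor i

mu_star = np.abs(ee).mean(axis=0)  # mean |EE|: overall influence (Morris mu*)
sigma = ee.std(axis=0)             # spread: nonlinearity and interactions

assert mu_star[0] > mu_star[1] > mu_star[2]   # importance ranking recovered
assert sigma[1] > sigma[0]                    # x1 flagged as nonlinear
```

A high mu* with high sigma (like x1 here) signals nonlinearity or interactions, which is exactly the situation where follow-up with variance-based Sobol indices, as done in the study, pays off.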
National health expenditures: a global analysis.
Murray, C. J.; Govindaraj, R.; Musgrove, P.
1994-01-01
As part of the background research to the World development report 1993: investing in health, an effort was made to estimate public, private and total expenditures on health for all countries of the world. Estimates could be found for public spending for most countries, but for private expenditure in many fewer countries. Regressions were used to predict the missing values of regional and global estimates. These econometric exercises were also used to relate expenditure to measures of health status. In 1990 the world spent an estimated US$ 1.7 trillion (1.7 x 10(12)) on health, or $1.9 trillion (1.9 x 10(12)) in dollars adjusted for higher purchasing power in poorer countries. This amount was about 60% public and 40% private in origin. However, as incomes rise, public health expenditure tends to displace private spending and to account for an increasing share of incomes devoted to health. PMID:7923542
Analysis and visualization of global magnetospheric processes
Winske, D.; Mozer, F.S.; Roth, I.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The purpose of this project is to develop new computational and visualization tools to analyze particle dynamics in the Earth's magnetosphere. These tools allow the construction of a global picture of particle fluxes, which requires only a small number of in situ spacecraft measurements as input parameters. The methods developed in this project have led to a better understanding of particle dynamics in the Earth's magnetotail in the presence of turbulent wave fields. They have also been used to demonstrate how large electromagnetic pulses in the solar wind can interact with the magnetosphere to increase the population of energetic particles and even form new radiation belts.
Water Grabbing analysis at global scale
NASA Astrophysics Data System (ADS)
Rulli, M.; Saviori, A.; D'Odorico, P.
2012-12-01
"Land grabbing" is the acquisition of agricultural land by foreign governments and corporations, a phenomenon that has greatly intensified over the last few years as a result of the increase in food prices and biofuel demand. Land grabbing is inherently associated with an appropriation of freshwater resources that has never been investigated before. Here we provide a global assessment of the total grabbed land and water resources. Using process-based agro-hydrological models we estimate the rates of freshwater grabbing worldwide. We find that this phenomenon is occurring at alarming rates in all continents except Antarctica. The per capita volume of grabbed water often exceeds the water requirements for a balanced diet and would be sufficient to abate malnourishment in the grabbed countries. High rates of water grabbing are often associated with deforestation and the increase in water withdrawals for irrigation.
Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident
Pasichnyk, I.; Perin, Y.; Velkov, K.
2013-07-01
The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)
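Uncertainty analyses of this kind with SUSA conventionally size the random sample using Wilks' formula, which fixes the minimum number of code runs for a one-sided tolerance limit independently of how many uncertain parameters (here 378) are varied. A sketch, assuming the standard first-order formulation:

```python
# Wilks' formula (first order, one-sided): smallest n with 1 - q**n >= beta,
# where q is the required coverage (e.g. 0.95) and beta the confidence level.
def wilks_sample_size(coverage=0.95, confidence=0.95):
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())             # 59: the classic 95%/95% sample size
print(wilks_sample_size(0.95, 0.99))   # 90: same coverage, 99% confidence
```

The independence from the parameter count is what makes the approach tractable for a 378-parameter problem.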
NASA Astrophysics Data System (ADS)
Li, J.; Duan, Q. Y.; Gong, W.; Ye, A.; Dai, Y.; Miao, C.; Di, Z.; Tong, C.; Sun, Y.
2013-08-01
Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
NASA Astrophysics Data System (ADS)
Li, J. D.; Duan, Q. Y.; Gong, W.; Ye, A. Z.; Dai, Y. J.; Miao, C. Y.; Di, Z. H.; Tong, C.; Sun, Y. W.
2013-02-01
Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e. sensitive parameters labeled as insensitive) or type II errors (i.e. insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
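The total-effect Sobol' indices used in both versions of this study can be estimated from two independent sample matrices with Jansen's estimator. A minimal sketch on a hypothetical additive test model whose indices are known analytically (not the Common Land Model):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(X):
    # Additive test model on [0,1]^3: variance contributions 1/12 and 4/12,
    # so the total-effect indices are exactly (0.2, 0.8, 0.0)
    return X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2]

N, d = 100_000, 3
A = rng.uniform(0, 1, (N, d))    # two independent sample matrices
B = rng.uniform(0, 1, (N, d))
fA = model(A)
var = fA.var()

# Jansen's estimator for the total-effect index S_Ti
ST = np.zeros(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]           # resample only factor i
    ST[i] = 0.5 * np.mean((fA - model(AB)) ** 2) / var

assert np.allclose(ST, [0.2, 0.8, 0.0], atol=0.03)
```

With 40 adjustable parameters this design costs N * (d + 1) model runs, which is why the study's cheaper qualitative screening step (Morris and friends) comes first.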
NASA Astrophysics Data System (ADS)
Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan
2015-04-01
-defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or use as checks for the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value; the correlation is 'partial' because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
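The PRCC computation described above can be sketched directly: rank-transform inputs and output, regress out the other inputs from both the input of interest and the output, and correlate the residuals. The model below is a hypothetical monotone test function, not the IMM:

```python
import numpy as np

rng = np.random.default_rng(4)

def rankdata(a):
    # Simple rank transform (continuous samples, so ties are not a concern)
    return np.argsort(np.argsort(a)).astype(float)

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for i in range(R.shape[1]):
        others = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        # Residualize both the i-th input and the output on the other inputs
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Hypothetical monotone model: x0 dominates, x1 is weak, x2 is inert
X = rng.uniform(0, 1, (500, 3))
y = 10 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(500)
r = prcc(X, y)
assert r[0] > 0.9 and abs(r[0]) > abs(r[2])
```

Sorting conditions by |PRCC| is what populates a tornado plot like the one the abstract mentions.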
Li, Peiyue; Qian, Hui; Wu, Jianhua; Chen, Jie
2013-03-01
Sensitivity analysis is becoming increasingly widespread in many fields of engineering and sciences and has become a necessary step to verify the feasibility and reliability of a model or a method. The sensitivity of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method in water quality assessment mainly includes sensitivity to the parameter weights and sensitivity to the index input data. In the present study, the sensitivity of TOPSIS to the parameter weights was discussed in detail. The present study assumed the original parameter weights to be equal to each other, and then each weight was changed separately to see how the assessment results would be affected. Fourteen schemes were designed to investigate the sensitivity to the variation of each weight. The variation ranges that keep the assessment results unchangeable were also derived theoretically. The results show that the final assessment results will change when the weights increase or decrease by ±20 to ±50 %. The feedback of different samples to the variation of a given weight is different, and the feedback of a given sample to the variation of different weights is also different. The final assessment results can keep relatively stable when a given weight is disturbed as long as the initial variation ratios meet one of the eight derived requirements. PMID:22752962
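The TOPSIS procedure whose weight sensitivity is studied here can be sketched compactly; the decision matrix and weights below are illustrative, not the paper's water-quality data. Perturbing one weight and re-ranking shows the kind of rank reversal the fourteen schemes probe:

```python
import numpy as np

def topsis(M, w, benefit):
    """Closeness coefficient for each alternative (row of M)."""
    V = M / np.linalg.norm(M, axis=0) * w            # vector-normalize, weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Illustrative matrix: 3 water samples x 2 cost-type indices (higher = worse)
M = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [2.0, 3.0]])
benefit = np.array([False, False])

base = np.argsort(-topsis(M, np.array([0.5, 0.5]), benefit))
perturbed = np.argsort(-topsis(M, np.array([0.8, 0.2]), benefit))
# Shifting weight onto the first index swaps the second- and third-ranked
# samples -- the rank reversal the weight-perturbation schemes detect.
assert not np.array_equal(base, perturbed)
```

Repeating the re-ranking over a grid of weight perturbations yields the stability thresholds (here reported as roughly +/-20 to +/-50 %) beyond which the assessment changes.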
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric; Einaudi, Franco (Technical Monitor)
2001-01-01
The 22 year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) and the four year (1997-present) daily GPCP analysis are described in terms of the data sets and analysis techniques used in their preparation. These analyses are then used to study global and regional variations and trends during the 22 years and the shorter-time-scale events that constitute those variations. The GPCP monthly data set shows no significant trend in global precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. The global trend analysis must be interpreted carefully, however, because the inhomogeneity of the data set makes detecting a small signal very difficult, especially over this relatively short period. The relation of global (and tropical) total precipitation to ENSO (El Nino and Southern Oscillation) events is quantified, with no significant signal when land and ocean are combined. In terms of regional trends from 1979 to 2000, the tropics show a distribution of regional rainfall trends with an ENSO-like pattern that has features of both the El Nino and La Nina. This feature is related to a possible trend in the frequency of ENSO events (either El Nino or La Nina) over the past 20 years. Monthly anomalies of precipitation are related to ENSO variations, with clear signals extending into middle and high latitudes of both hemispheres. The El Nino and La Nina mean anomalies are near mirror images of each other and, when combined, produce an ENSO signal with significant spatial continuity over large distances. A number of the features are shown to extend into high latitudes. Positive anomalies extend in the Southern Hemisphere from the Pacific southeastward across Chile and Argentina into the south Atlantic Ocean. In the Northern Hemisphere the counterpart feature extends across the southern U.S. and Atlantic Ocean into Europe. In the
A new strategy for sensitivity analysis when modelling extreme events in the geosciences
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2014-05-01
Natural hazard models - used to predict and evaluate extreme events like prolonged droughts, floods, windstorms, etc. - are affected by unavoidable and potentially large uncertainty. Uncertainty sources are manifold, including simplifying assumptions in the model structure (e.g. coarse spatial resolution), uncertain parameter values, measurement errors, etc. Global Sensitivity Analysis (GSA) can be used to assess the relative contributions from these different sources to the uncertainty in the model predictions. By providing insights into the model behavior and potential for simplification, GSA indicates where further data collection and research is needed or would be beneficial, and enhances the credibility of the modelling results. In this work we present a novel Regional-Global approach for Sensitivity Analysis. The method is "global" in the model inputs and "regional" in the output, that is, it considers variations of the uncertain inputs across their entire feasibility range but can be focused on their effects on a specific region of the model response, e.g. extreme values. The method is therefore especially promising for natural hazard applications where the focus is on the effect of uncertain inputs on a specific range of values of the model output. The main underlying idea is to measure sensitivity by the distance between the unconditional distribution of the model output (i.e. when all input factors vary) and the conditional distribution when one of the input factors is fixed. Such sensitivity measures can be computed either over the entire range of the output distribution or tuned to consider only a sub-range, for instance the tail of the distribution. We use several natural hazards examples to demonstrate the approach and compare it to other widely applied GSA methods like Sobol and Regional Sensitivity Analysis.
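The underlying idea - sensitivity as the distance between the unconditional output distribution and the distribution conditional on fixing one input - can be sketched with a Kolmogorov-Smirnov statistic (one conditioning value is shown; in practice one averages or maximizes over several). The test model is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def ks_distance(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.max(np.abs(cdf(a) - cdf(b)))

def model(X):
    return X[:, 0] ** 2 + 0.1 * X[:, 1]    # x0 dominant, x1 weak

N = 5000
X = rng.uniform(-1, 1, (N, 2))
y_all = model(X)                            # unconditional output sample

# Conditional sample: fix input i at a nominal value, vary everything else
sens = []
for i in range(2):
    Xc = rng.uniform(-1, 1, (N, 2))
    Xc[:, i] = 0.5                          # one conditioning value
    sens.append(ks_distance(y_all, model(Xc)))

assert sens[0] > sens[1]   # fixing dominant x0 shifts the output CDF more
```

Restricting `grid` to the upper tail of the output would give the "regional" variant that targets extreme values.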
Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment
NASA Technical Reports Server (NTRS)
Lee, Meemong; Bowman, Kevin
2014-01-01
Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.
Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change
Thompson, A.M.; Stewart, R.W.; Owens, M.A.
1989-01-01
The sensitivities of tropospheric (H{sub 2}O{sub 2}) levels to increases in the CH{sub 4}, CO and NO emissions and to changes in stratospheric O{sub 3} and tropospheric O{sub 3} and H{sub 2}O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH{sub 4}-CO-NO{sub x} emissions and global climate changes are used to predict HO{sub 2} and H{sub 2}O{sub 2} changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH{sub 4} emissions will suppress H{sub 2}O{sub 2} except in very low NO{sub x} regions. A global warming (with increased H{sub 2}O vapor) or stratospheric O{sub 3} depletion will add to H{sub 2}O{sub 2}. Hydrogen peroxide increases from 1980 to 2030 could be 100% or more in the urban boundary layer. Increases in CH{sub 4}, CO and O{sub 3} that have occurred in the industrial era (since 1800) have probably produced temporal increases in background HO{sub 2} and H{sub 2}O{sub 2}. It might be possible to use H{sub 2}O{sub 2} in ice cores to track these changes. Where formation of sulfuric acid in cloudwater and precipitation is oxidant limited, H{sub 2}O{sub 2} and HO{sub 2} increases could be contributing to increases in acid precipitation.
An estimate of equilibrium sensitivity of global terrestrial carbon cycle using NCAR CCSM4
NASA Astrophysics Data System (ADS)
Bala, G.; Krishna, Sujith; Narayanappa, Devaraju; Cao, Long; Caldeira, Ken; Nemani, Ramakrishna
2013-04-01
Increasing concentrations of atmospheric CO2 influence climate, terrestrial biosphere productivity and ecosystem carbon storage through its radiative, physiological and fertilization effects. In this paper, we quantify these effects for a doubling of CO2 using a low resolution configuration of the coupled model NCAR CCSM4. In contrast to previous coupled climate-carbon modeling studies, we focus on the near-equilibrium response of the terrestrial carbon cycle. For a doubling of CO2, the radiative effect on the physical climate system causes global mean surface air temperature to increase by 2.14 K, whereas the physiological and fertilization effects on the land biosphere cause a warming of 0.22 K, suggesting that these latter effects increase global warming by about 10 % as found in many recent studies. CO2-fertilization leads to a total ecosystem carbon gain of 371 Gt-C (28 %) while the radiative effect causes a loss of 131 Gt-C (~10 %), indicating that climate warming damps the fertilization-induced carbon uptake over land. Our model-based estimate for the maximum potential terrestrial carbon uptake resulting from a doubling of atmospheric CO2 concentration (285-570 ppm) is only 242 Gt-C. This highlights the limited storage capacity of the terrestrial carbon reservoir. We also find that the terrestrial carbon storage sensitivity to changes in CO2 and temperature has been estimated to be lower in previous transient simulations because of lags in the climate-carbon system. Our model simulations indicate that the time scale of terrestrial carbon cycle response is greater than 500 years for CO2-fertilization and about 200 years for temperature perturbations. We also find that dynamic changes in vegetation amplify the terrestrial carbon storage sensitivity relative to a static vegetation case: because of changes in tree cover, changes in total ecosystem carbon for CO2-direct and climate effects are amplified by 88 and 72 %, respectively, in simulations with dynamic
Sensitivity analysis technique for application to deterministic models
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, constructed using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
Global kinetic analysis of seeded BSA aggregation.
Sahin, Ziya; Demir, Yusuf Kemal; Kayser, Veysel
2016-04-30
Accelerated aggregation studies were conducted around the melting temperature (Tm) to elucidate the kinetics of seeded BSA aggregation. Aggregation was tracked by SEC-HPLC and intrinsic fluorescence spectroscopy. The time evolution of monomer, dimer and soluble aggregate concentrations was globally analysed to reliably deduce mechanistic details pertinent to the process. Results showed that BSA aggregated irreversibly through both sequential monomer addition and aggregate-aggregate interactions. Sequential monomer addition proceeded only via non-native monomers, starting to occur only by 1-2°C below the Tm. Aggregate-aggregate interactions were the dominant mechanism below the Tm due to an initial presence of small aggregates that acted as seeds. Aggregate-aggregate interactions were significant also above the Tm, particularly at later stages of aggregation when sequential monomer addition seemed to cease, leading in some cases to insoluble aggregate formation. The adherence (or lack thereof) of the mechanisms to Arrhenius kinetics was discussed alongside possible implications of seeding for biopharmaceutical shelf-life and spectroscopic data interpretation, the latter of which was found to often be overlooked in BSA aggregation studies. PMID:26970282
The resolution sensitivity of the South Asian monsoon and Indo-Pacific in a global 0.35° AGCM
NASA Astrophysics Data System (ADS)
Johnson, Stephanie J.; Levine, Richard C.; Turner, Andrew G.; Martin, Gill M.; Woolnough, Steven J.; Schiemann, Reinhard; Mizielinski, Matthew S.; Roberts, Malcolm J.; Vidale, Pier Luigi; Demory, Marie-Estelle; Strachan, Jane
2016-02-01
The South Asian monsoon is one of the most significant manifestations of the seasonal cycle. It directly impacts nearly one third of the world's population and also has substantial global influence. Using 27-year integrations of a high-resolution atmospheric general circulation model (Met Office Unified Model), we study changes in South Asian monsoon precipitation and circulation when horizontal resolution is increased from approximately 200 km to 40 km at the equator (N96-N512, 1.9°-0.35°). The high resolution, integration length and ensemble size of the dataset make this the most extensive dataset used to evaluate the resolution sensitivity of the South Asian monsoon to date. We find a consistent pattern of JJAS precipitation and circulation changes as resolution increases, which include a slight increase in precipitation over peninsular India, changes in Indian and Indochinese orographic rain bands, increasing wind speeds in the Somali Jet, increasing precipitation over the Maritime Continent islands and decreasing precipitation over the northern Maritime Continent seas. To diagnose which resolution-related processes cause these changes, we compare them to published sensitivity experiments that change regional orography and coastlines. Our analysis indicates that improved resolution of the East African Highlands results in the improved representation of the Somali Jet and further suggests that improved resolution of orography over Indochina and the Maritime Continent results in more precipitation over the Maritime Continent islands at the expense of reduced precipitation further north. We also evaluate the resolution sensitivity of monsoon depressions and lows, which contribute more precipitation over northeast India at higher resolution. We conclude that while increasing resolution at these scales does not solve the many monsoon biases that exist in GCMs, it has a number of small, beneficial impacts.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines from what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763
New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi
2012-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the 'Butterfly Effect', make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds a 'shadow trajectory' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on some example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
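The 'Butterfly Effect' obstacle named above is easy to reproduce: two Lorenz trajectories started a distance 1e-8 apart decorrelate completely, so tangent or adjoint sensitivities of instantaneous quantities grow exponentially, even though long-time averages remain well-behaved. A sketch with standard Lorenz parameters (this demonstrates the problem, not the Fokker-Planck or Least Squares Sensitivity remedies):

```python
import numpy as np

def f(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(u, dt):
    # Classical fourth-order Runge-Kutta step
    k1 = f(u)
    k2 = f(u + dt / 2 * k1)
    k3 = f(u + dt / 2 * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n = 0.01, 5000                      # 50 time units on the attractor
u = np.array([1.0, 1.0, 20.0])
v = u + np.array([1e-8, 0.0, 0.0])      # tiny initial perturbation
sep_max, z_avg = 0.0, 0.0
for _ in range(n):
    u, v = rk4(u, dt), rk4(v, dt)
    sep_max = max(sep_max, float(np.linalg.norm(u - v)))
    z_avg += u[2] / n

# The perturbation grows to attractor scale -- this exponential divergence
# is what defeats traditional tangent/adjoint sensitivities -- while the
# long-time average of z stays near its climatological value (roughly 23.5).
assert sep_max > 1.0
assert 20.0 < z_avg < 27.0
```

The shadowing idea mentioned in the abstract replaces the exponentially diverging trajectory with a nearby non-diverging one, restoring a meaningful gradient of the long-time average.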
Imaging system sensitivity analysis with NV-IPM
NASA Astrophysics Data System (ADS)
Fanning, Jonathan; Teaney, Brian
2014-05-01
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
NASA Technical Reports Server (NTRS)
Considine, David B.; Connell, Peter S.; Bergmann, Daniel J.; Rotman, Douglas A.; Strahan, Susan E.
2004-01-01
We use the Global Modeling Initiative chemistry and transport model to simulate the evolution of stratospheric ozone between 1995 and 2030, using boundary conditions consistent with the recent World Meteorological Organization ozone assessment. We compare the Antarctic ozone recovery predictions of two simulations, one driven by an annually repeated year of meteorological data from a general circulation model (GCM), the other using a year of output from a data assimilation system (DAS), to examine the sensitivity of Antarctic ozone recovery predictions to the characteristic dynamical differences between GCM- and DAS-generated meteorological data. Although the age of air in the Antarctic lower stratosphere differs by a factor of 2 between the simulations, we find little sensitivity of the 1995-2030 Antarctic ozone recovery between 350 and 650 K to the differing meteorological fields, particularly when the recovery is specified in mixing ratio units. Percent changes are smaller in the DAS-driven simulation compared to the GCM-driven simulation because of a surplus of Antarctic ozone in the DAS-driven simulation which is not consistent with observations. The peak ozone change between 1995 and 2030 in both simulations is approx. 20% lower than photochemical expectations, indicating that changes in ozone transport due to changing ozone gradients at 450 K between 1995 and 2030 constitute a small negative feedback. Total winter/spring ozone loss during the base year (1995) of both simulations and the rate of ozone loss during August and September are somewhat weaker than observed. This appears to be due to underestimates of Antarctic Cl(sub y) at the 450 K potential temperature level.
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models must usually contend with a large number of distributed parameters and with optimization problems whose multiple objectives naturally conflict. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, combining two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) algorithm ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). The approach was implemented to calibrate MOBIDIC for the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of logarithm-transformed discharge, the water balance index, and the mean absolute error of the logarithm-transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm employed in MOBIDIC, taking the objective function as the Euclidean norm of the three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for identifying the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are insensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; and (4) compared to SOO, which was dependent on the initial starting location, MOO provides more
Sensitivity analysis approach to multibody systems described by natural coordinates
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders automated modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed; it introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed procedure of the first-order direct sensitivity analysis, and the related solution strategy are provided on the basis of the modeling system. Finally, to verify the validity and accuracy of the method, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The results agree well with those of the finite difference method, with a maximum absolute deviation of less than 3%. The proposed approach is not only convenient for automatic modeling but also reduces the complexity of sensitivity analysis, providing a practical and effective way to obtain sensitivities for the optimization of multibody systems.
Global/local stress analysis of composite panels
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Knight, Norman F., Jr.
1989-01-01
A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. PMID:26017545
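The screening step in the two-step approach above rests on Morris's elementary-effects method. A minimal sketch of the idea, assuming random one-at-a-time perturbations on the unit hypercube and an invented three-parameter toy model (this is not the vascular-access model or the exact sampling scheme of the paper):

```python
import random
import statistics

def morris_screening(f, k, r=20, delta=0.1, seed=0):
    """Elementary-effects screening (Morris): for r random base points on
    the unit hypercube, perturb one input at a time by delta and record
    EE_i = (f(x + delta*e_i) - f(x)) / delta; report mu* (mean |EE|) and
    sigma (std of EE) per input."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append((f(xp) - base) / delta)
    mu_star = [statistics.fmean(abs(e) for e in ee) for ee in effects]
    sigma = [statistics.stdev(ee) for ee in effects]
    return mu_star, sigma

# Invented toy model: strong in x0, weak and nonlinear in x1, inert in x2.
model = lambda x: 10.0 * x[0] + 0.1 * x[1] ** 2
mu_star, sigma = morris_screening(model, k=3)
ranking = sorted(range(3), key=lambda i: -mu_star[i])
```

Inputs with negligible mu* (here x2) would be fixed at nominal values, leaving only the important subset for the expensive variance-based step.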
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best presented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts with high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Sensitivity of agro-environmental zones in Spain to global climatic change
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Guzmán, G.; Vanderlinden, K.; Laguna, A.; Giraldez, J. V.
2014-12-01
Soil has a key role in the regulation of carbon, water and nutrient cycles. Traditionally, agricultural soil management was oriented towards optimizing productivity. Nowadays, mitigating climate change effects and maintaining long-term soil quality are equally important. Policy guidelines for best management practices need to be site-specific, given the large spatial variability of environmental conditions within the EU. It is therefore necessary to classify the different farming zones that are susceptible to soil degradation. Especially in Mediterranean areas, this variability and the susceptibility to degradation are higher than in other areas of the EU. The objective of this study is therefore to delineate current agro-environmental zones in Spain and to determine the effect of global climate change on this classification in the future. The final objective is to assist policy makers in scenario analysis with respect to soil conservation. Our classification scheme is based on soil, topography and climate (seasonal temperature and rainfall) variables. We calculated slope and elevation from an SRTM-derived DEM, soil texture was extracted from the European Soil Database, and seasonal mean, minimum and maximum precipitation and temperature data were gridded from publicly available weather station data (Aemet). Global change scenarios are average downscaled ensemble predictions for the emission scenarios A2 and B2. The k-means method was used for classification of the 10 km x 10 km gridded variables. Using these input variables, the optimal number of agro-environmental zones obtained is 8. The classification corresponds well with the observed distribution of farming typologies in Spain. The advantage of this method is that it is simple and objective and uses only readily available, public data. As such, its extrapolation to other countries of the EU is straightforward. Finally, it presents a tool for policy makers to assess
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. An LU factorization technique for calculating the Lagrange multipliers improves both convergence and computational expense. Numerical experiments on a set of problems selected from the literature illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Adjoint-based sensitivity analysis for reactor-safety applications
Parks, C.V.
1985-01-01
The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the general extensions needed to treat model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) VENUS-II, an existing code typically employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST that they can illustrate. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis.
Bayesian sensitivity analysis of a nonlinear finite element model
NASA Astrophysics Data System (ADS)
Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.
2012-10-01
A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
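The emulator idea described above can be sketched with a minimal noise-free Gaussian-process regressor using a squared-exponential kernel. The training function, kernel hyperparameters and jitter below are arbitrary choices for illustration, not those of the airship study:

```python
import numpy as np

def gp_emulator(X, y, Xstar, length=1.0, var=1.0, jitter=1e-8):
    """Noise-free GP emulator with a squared-exponential kernel; returns
    the posterior mean and variance at the prediction points Xstar."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + jitter * np.eye(len(X))   # jitter for numerical stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xstar)
    mean = Ks.T @ alpha                      # posterior mean
    v = np.linalg.solve(L, Ks)
    pvar = var - np.sum(v * v, axis=0)       # posterior (emulation) variance
    return mean, pvar

# Train on 8 runs of an "expensive" model (here just sin), predict cheaply.
X = np.linspace(0.0, np.pi, 8)
y = np.sin(X)
mean, pvar = gp_emulator(X, y, np.array([np.pi / 2.0]))
```

Once fitted, the emulator replaces the full model inside Monte Carlo uncertainty and sensitivity loops; its posterior variance quantifies the extra uncertainty introduced by the emulation itself.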
Global processing takes time: A meta-analysis on local-global visual processing in ASD.
Van der Hallen, Ruth; Evers, Kris; Brewaeys, Katrien; Van den Noortgate, Wim; Wagemans, Johan
2015-05-01
What does an individual with autism spectrum disorder (ASD) perceive first: the forest or the trees? In spite of 30 years of research and influential theories like the weak central coherence (WCC) theory and the enhanced perceptual functioning (EPF) account, the interplay of local and global visual processing in ASD remains only partly understood. Research findings vary in indicating a local processing bias or a global processing deficit, and often contradict each other. We have applied a formal meta-analytic approach and combined 56 articles that tested about 1,000 ASD participants and used a wide range of stimuli and tasks to investigate local and global visual processing in ASD. Overall, results show neither enhanced local visual processing nor a deficit in global visual processing. Detailed analysis reveals a difference in the temporal pattern of the local-global balance, that is, slow global processing in individuals with ASD. Whereas task-dependent interaction effects are obtained, gender, age, and IQ of either participant group seem to have no direct influence on performance. Based on the overview of the literature, suggestions are made for future research. PMID:25420221
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation, regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach. PMID:24978258
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Sacks, Jerome; Chang, Yue-Fang
1993-01-01
Methods for the design and analysis of numerical experiments that are especially useful and efficient in multidimensional parameter spaces are presented. The analysis method, which is similar to kriging in the spatial analysis literature, fits a statistical model to the output of the numerical model. The method is applied to a fully nonlinear, global, equivalent-barotropic dynamical model. The statistical model also provides estimates for the uncertainty of predicted numerical model output, which can provide guidance on where in the parameter space to conduct further experiments, if necessary. The method can provide significant improvements in the efficiency with which numerical sensitivity experiments are conducted.
Sensitivity analysis for volcanic source modeling quality assessment and model selection
NASA Astrophysics Data System (ADS)
Cannavó, Flavio
2012-07-01
The increasing knowledge and understanding of volcanic sources has led to the development and implementation of sophisticated and complex mathematical models whose main goal is to describe field and experimental data. Quantifying a model's ability to describe the data becomes fundamental for a realistic estimate of the model parameters. Sensitivity analysis can help in identifying the parameters that significantly affect the model's output and in assessing its quality factor. In this paper, we describe Global Sensitivity Analysis (GSA) methods based on both the Fourier Amplitude Sensitivity Test and the Sobol' approach, and discuss their implementation in a Matlab software tool (GSAT). We also introduce a new criterion for model selection based on sensitivity analysis. The proposed approach is tested and applied to quantify the fitting ability of an analytic volcanic source model on synthetic deformation data. Results show the validity of the method, compared with traditional approaches, in supporting volcanic model selection, and the flexibility of the GSAT software tool in analyzing model sensitivity.
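First-order Sobol' indices of the kind computed by such GSA tools can be estimated with a Saltelli-type Monte Carlo scheme. A pure-Python sketch on an invented additive toy model (not the volcanic source model of the paper), where the analytic indices are known:

```python
import random

def sobol_first_order(f, k, n=20000, seed=1):
    """First-order Sobol' indices via the Saltelli (2010) estimator:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), where AB_i is the sample
    matrix A with column i replaced by the corresponding column of B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    S = []
    for i in range(k):
        fAB = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fAB, fA)) / n / var)
    return S

# Invented additive test model y = x1 + 0.5*x2 on [0,1]^2:
# analytically S1 = 0.8 and S2 = 0.2 (no interaction terms).
S = sobol_first_order(lambda x: x[0] + 0.5 * x[1], k=2)
```

The total cost is n*(k+2) model runs, which is exactly why emulators or FAST-type spectral methods become attractive for expensive models.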
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results show that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
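Sensitivity analyses of epidemic models of this kind commonly use the normalized forward sensitivity index, (dR0/dp)*(p/R0). A sketch with a hypothetical SIR-style reproduction number and invented parameter values, not the five-compartment Lassa model of the paper:

```python
def normalized_sensitivity(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (df/dp) * (p / f) of an output
    f with respect to parameter `name`, via central differences."""
    base = f(params)
    up = dict(params); up[name] = params[name] * (1.0 + h)
    dn = dict(params); dn[name] = params[name] * (1.0 - h)
    deriv = (f(up) - f(dn)) / (2.0 * h * params[name])
    return deriv * params[name] / base

# Hypothetical reproduction number, for illustration only.
R0 = lambda p: p["beta"] / (p["gamma"] + p["mu"])
params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
s_beta = normalized_sensitivity(R0, params, "beta")    # analytically +1
s_gamma = normalized_sensitivity(R0, params, "gamma")  # -gamma/(gamma+mu)
```

An index of +1 means a 10% rise in the parameter raises R0 by 10%; a negative index identifies parameters (such as the recovery rate here) that control strategies should try to increase.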
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Uncertainty and sensitivity analysis and its applications in OCD measurements
NASA Astrophysics Data System (ADS)
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-saver for optimizing OCD models. By including real system noise in the model, an accurate method for predicting measurement uncertainties is demonstrated. Assessing, at an early stage, the uncertainties, sensitivities and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can easily be selected to achieve the best OCD model performance.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
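The tolerance syntax described above lends itself to a simple regular-expression treatment: scan the input file for `value +/- tol` fields and replace each with a random draw for one Monte Carlo sample. This is a hypothetical sketch (the field names, file format, and uniform sampling are invented), not the actual implementation used with LAURA, HARA, or FIAT:

```python
import random
import re

# Matches fields of the form "5.25 +/- 0.01" anywhere in the text.
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def blur_inputs(text, rng):
    """Replace every 'value +/- tolerance' field in an input deck with a
    uniform random draw from [value - tol, value + tol]."""
    def draw(match):
        val, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(val - tol, val + tol))
    return TOL.sub(draw, text)

# Invented two-line input deck; running this repeatedly yields the
# perturbed decks for a Monte Carlo sensitivity study.
deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.80 +/- 0.05\n"
sample = blur_inputs(deck, random.Random(0))
```

Because only the annotated fields are rewritten, the same routine works on any code's input format without a format-specific parser, which is the portability argument the abstract makes.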
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
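The core mechanics of importance sampling for small failure probabilities can be shown on a one-dimensional toy. This sketch is plain (non-adaptive) importance sampling with the biased density simply centred at the failure threshold, not the adaptive, incrementally grown sampling domain of the paper:

```python
import math
import random

def importance_sampling_pof(threshold, n=50000, seed=0):
    """Failure probability P(X > threshold) for X ~ N(0, 1), estimated by
    importance sampling from the shifted density N(threshold, 1): average
    the likelihood ratio phi(x) / phi(x - threshold) over failed samples."""
    rng = random.Random(seed)
    mu = threshold
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)  # draw from the biased density
        if x > threshold:       # failure indicator
            acc += math.exp(-x * x / 2.0 + (x - mu) ** 2 / 2.0)
    return acc / n

pf = importance_sampling_pof(3.0)  # exact value: 1 - Phi(3) = 1.3499e-3
```

Centring the sampling density on the failure region means roughly half the samples fail, whereas crude Monte Carlo would waste almost all of its samples in the safe region; the adaptive scheme of the paper refines this idea by learning the failure domain incrementally.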
Parameter sensitivity analysis of IL-6 signalling pathways.
Chu, Y; Jayaraman, A; Hahn, J
2007-11-01
Signal transduction pathways generally consist of a large number of individual components and have an even greater number of parameters describing their reaction kinetics. Although the structure of some signalling pathways can be found in the literature, many of the parameters are not well known and they would need to be re-estimated from experimental data for each specific case. However it is not feasible to estimate hundreds of parameters because of the cost of the experiments associated with generating data. Parameter sensitivity analysis can address this situation as it investigates how the system behaviour is changed by variations of parameters and the analysis identifies which parameters play a key role in signal transduction. Only these important parameters then need to be re-estimated using data from further experiments. This article presents a detailed parameter sensitivity analysis of the JAK/STAT and MAPK signal transduction pathway that is used for signalling by the cytokine IL-6. As no parameter sensitivity analysis technique is known to work best for all situations, a comparison of the results returned by four techniques is presented: differential analysis, the Morris method, a sampling-based approach and the Fourier amplitude sensitivity test. The recruitment of the transcription factor STAT3 to the dimer of the phosphorylated receptor complex is determined as the most important step by the sensitivity analysis. Additionally, the dephosphorylation of the nuclear STAT3 dimer by PP2 as well as feedback inhibition by SOCS3 are found to play an important role for signal transduction. PMID:18203580
Multicriteria Evaluation and Sensitivity Analysis on Information Security
NASA Astrophysics Data System (ADS)
Syamsuddin, Irfan
2013-05-01
Information security plays a significant role in today's information society. The increasing number and impact of cyber attacks on information assets have made managers aware that an attack on information is, in effect, an attack on the organization itself. Unfortunately, no model of information security evaluation suited to management levels is yet well defined. In this study, decision analysis based on the Ternary Analytic Hierarchy Process (T-AHP) is proposed as a novel model to aid managers responsible for strategic evaluations of information security issues. In addition, sensitivity analysis is applied to extend the analysis with several "what-if" scenarios in order to measure the consistency of the final evaluation. Finally, we conclude that the final evaluation made by managers has a significant consistency, as shown by the sensitivity analysis results.
Recurrence quantification analysis of global stock markets
NASA Astrophysics Data System (ADS)
Bastos, João A.; Caiado, Jorge
2011-04-01
This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
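A minimal sketch of variance-based sensitivity analysis, using a standard pick-freeze Monte Carlo estimator of first-order Sobol' indices. The model below is a hypothetical nonlinear function of uniform inputs, not one of the article's measurement examples; note that the inert third input receives an index of exactly zero.

```python
import numpy as np

def first_order_sobol(model, n, d, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for d independent U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = model(A)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                      # freeze input i from sample A
        S[i] = np.mean(yA * (model(ABi) - model(B))) / var
    return S

# Non-linear toy model: input 0 dominates, input 2 is inert
model = lambda X: X[:, 0] ** 2 + 0.3 * X[:, 1]
rng = np.random.default_rng(1)
S = first_order_sobol(model, 100_000, 3, rng)
```

For a purely linear model the ranking produced here coincides with the one already implied by the law-of-propagation terms, which is exactly the article's point; the method only adds information when the model is nonlinear.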
Sensitivity analysis of the Ohio phosphorus risk index
Technology Transfer Automated Retrieval System (TEKTRAN)
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate significant computational savings with minimal loss in accuracy.
Breastfeeding policy: a globally comparative analysis
Raub, Amy; Earle, Alison
2013-01-01
Abstract Objective To explore the extent to which national policies guaranteeing breastfeeding breaks to working women may facilitate breastfeeding. Methods An analysis was conducted of the number of countries that guarantee breastfeeding breaks, the daily number of hours guaranteed, and the duration of guarantees. To obtain current, detailed information on national policies, original legislation as well as secondary sources on 182 of the 193 Member States of the United Nations were examined. Regression analyses were conducted to test the association between national policy and rates of exclusive breastfeeding while controlling for national income level, level of urbanization, female percentage of the labour force and female literacy rate. Findings Breastfeeding breaks with pay are guaranteed in 130 countries (71%) and unpaid breaks are guaranteed in seven (4%). No policy on breastfeeding breaks exists in 45 countries (25%). In multivariate models, the guarantee of paid breastfeeding breaks for at least 6 months was associated with an increase of 8.86 percentage points in the rate of exclusive breastfeeding (P < 0.05). Conclusion A greater percentage of women practise exclusive breastfeeding in countries where laws guarantee breastfeeding breaks at work. If these findings are confirmed in longitudinal studies, health outcomes could be improved by passing legislation on breastfeeding breaks in countries that do not yet ensure the right to breastfeed. PMID:24052676
Zhao, Huaying; Piszczek, Grzegorz; Schuck, Peter
2015-04-01
Isothermal titration calorimetry experiments can provide significantly more detailed information about molecular interactions when combined in global analysis. For example, global analysis can improve the precision of binding affinity and enthalpy, and of possible linkage parameters, even for simple bimolecular interactions, and greatly facilitate the study of multi-site and multi-component systems with competition or cooperativity. A prerequisite for global analysis is the departure from the traditional binding model, including an 'n'-value describing unphysical, non-integral numbers of sites. Instead, concentration correction factors can be introduced to account for either errors in the concentration determination or for the presence of inactive fractions of material. SEDPHAT is a computer program that embeds these ideas and provides a graphical user interface for the seamless combination of biophysical experiments to be globally modeled with a large number of different binding models. It offers statistical tools for the rigorous determination of parameter errors and correlations, as well as advanced statistical functions for global ITC (gITC) and global multi-method analysis (GMMA). SEDPHAT will also take full advantage of error bars of individual titration data points determined with the unbiased integration software NITPIC. The present communication reviews principles and strategies of global analysis for ITC and its extension to GMMA in SEDPHAT. We will also introduce a new graphical tool for aiding experimental design by surveying the concentration space and generating simulated data sets, which can be subsequently statistically examined for their information content. This procedure can replace the 'c'-value as an experimental design parameter, which ceases to be helpful for multi-site systems and in the context of gITC. PMID:25477226
Adjoint-based sensitivity analysis for reactor safety applications
Parks, C.V.
1986-08-01
The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II which has been employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes are different both in terms of complexity and in terms of the facets of DST which can be illustrated. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis. In addition, a useful sensitivity tool for use in the fast reactor safety area has been developed in VENUS-ADJ. Future work needs to concentrate on combining the accurate first-order derivatives/results from DST with existing methods (based solely on direct recalculations) for higher-order response surfaces.
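The core use of DST derivatives — predicting response changes to first order and verifying the prediction against a direct recalculation with perturbed inputs — can be illustrated with a toy response whose analytic gradient stands in for the adjoint result. Both functions below are invented for illustration; they are not the reactor-code responses.

```python
import numpy as np

# Toy response R(p) = exp(-p1) + p2**2, a stand-in for a code output.
def response(p):
    return np.exp(-p[0]) + p[1] ** 2

def gradient(p):
    # First-order derivatives, as an adjoint (DST) solve would supply.
    return np.array([-np.exp(-p[0]), 2 * p[1]])

p0 = np.array([1.0, 2.0])
g = gradient(p0)
dp = np.array([0.01, -0.02])                  # small input perturbation
predicted = g @ dp                            # first-order (DST-style) prediction
direct = response(p0 + dp) - response(p0)     # direct recalculation
```

For small perturbations the two agree to within the quadratic remainder, which is the verification strategy the abstract describes for ADJGAS and VENUS-ADJ; for large perturbations the mismatch is what motivates the higher-order response surfaces mentioned as future work.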
Integrative "omic" analysis for tamoxifen sensitivity through cell based models.
Weng, Liming; Ziliak, Dana; Lacroix, Bonnie; Geeleher, Paul; Huang, R Stephanie
2014-01-01
It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasians and African Americans have also been reported. Since most studies have focused on Caucasians, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among endoxifen (an active metabolite of tamoxifen) sensitivity, SNP genotype, mRNA and microRNA expressions in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on 34 genes and 30 microRNA expression. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs that were identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity. PMID:24699530
NASA Astrophysics Data System (ADS)
Anenberg, S.; Talgo, K.; Dolwick, P.; Jang, C.; Arunachalam, S.; West, J.
2010-12-01
Black carbon (BC), a component of fine particulate matter (PM2.5) released during incomplete combustion, is associated with atmospheric warming and deleterious health impacts, including premature cardiopulmonary and lung cancer mortality. A growing body of literature suggests that controlling emissions may therefore have dual benefits for climate and health. Several studies have focused on quantifying the potential impacts of reducing BC emissions from various world regions and economic sectors on radiative forcing. However, the impacts of these reductions on human health have been less well studied. Here, we use a global chemical transport model (MOZART-4) and a health impact function to quantify the surface air quality and human health benefits of controlling BC emissions. We simulate a base case and several emission control scenarios, where anthropogenic BC emissions are reduced by half globally, individually in each of eight world regions, and individually from the residential, industrial, and transportation sectors. We also simulate a global 50% reduction of both BC and organic carbon (OC) together, since they are co-emitted and both are likely to be impacted by actual control measures. Meteorology and biomass burning emissions are for the year 2002 with anthropogenic BC and OC emissions for 2000 from the IPCC AR5 inventory. Model performance is evaluated by comparing to global surface measurements of PM2.5 components. Avoided premature mortalities are calculated using the change in PM2.5 concentration between the base case and emission control scenarios and a concentration-response factor for chronic mortality from the epidemiology literature.
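The abstract does not specify the health impact function; a common log-linear concentration-response form from the PM2.5 epidemiology literature can be sketched as follows. The beta value, baseline mortality rate, population, and concentration change are all illustrative, not the study's inputs.

```python
import numpy as np

def avoided_mortality(delta_pm, y0, pop, beta=0.005827):
    """Log-linear concentration-response health impact function.
    beta ~ ln(1.06)/10, i.e. an illustrative relative risk of ~1.06 per
    10 ug/m3 PM2.5; actual CRFs come from the epidemiology literature."""
    return y0 * (1.0 - np.exp(-beta * delta_pm)) * pop

# Hypothetical grid cell: baseline mortality rate 0.008/yr, 2 million adults,
# and a 1.5 ug/m3 PM2.5 reduction from a BC control scenario.
dm = avoided_mortality(1.5, 0.008, 2_000_000)
```

In a study like this one, the function would be evaluated cell by cell on the difference between the base-case and control-scenario PM2.5 fields from the chemical transport model, then summed regionally.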
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs
Tong, C
2009-04-20
Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, inadequate sample size can badly pollute the result accuracies. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features of the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
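The iterative idea — grow the sample until the sensitivity estimate self-validates — can be sketched with a simple correlation-ratio main-effect estimator and sample doubling. This is a simplification of the paper's method, which refines stratified and replicated orthogonal array designs rather than plain random samples; the model and tolerances below are invented.

```python
import numpy as np

def main_effect(xs, ys, bins=20):
    """Correlation-ratio estimate of a first-order index: Var(E[Y|X])/Var(Y)."""
    edges = np.quantile(xs, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xs) - 1, 0, bins - 1)
    cond_means = np.array([ys[idx == b].mean() for b in range(bins)])
    return cond_means.var() / ys.var()

def self_validated(model, rng, n0=1000, tol=0.01, max_doublings=8):
    """Double the sample until successive index estimates agree within tol."""
    n, prev = n0, None
    for _ in range(max_doublings):
        X = rng.random((n, 2))
        s = main_effect(X[:, 0], model(X))
        if prev is not None and abs(s - prev) < tol:
            return s, n
        prev, n = s, 2 * n
    return prev, n // 2

model = lambda X: X[:, 0] + 0.1 * X[:, 1]      # input 0 dominates
rng = np.random.default_rng(2)
s, n = self_validated(model, rng)
```

The agreement check between successive estimates plays the role of the paper's self-assessment mechanism; stratified refinement makes the same loop far more sample-efficient.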
NASA Astrophysics Data System (ADS)
Fersch, Benjamin; Kunstmann, Harald
2014-05-01
Driving data and physical parametrizations can significantly impact the performance of regional dynamical atmospheric models in reproducing hydrometeorologically relevant variables. Our study addresses the water budget sensitivity of the Weather Research and Forecasting Model System WRF (WRF-ARW) with respect to two cumulus parametrizations (Kain-Fritsch, Betts-Miller-Janjić), two global driving reanalyses (ECMWF ERA-INTERIM and NCAR/NCEP NNRP), time variant and invariant sea surface temperature and optional gridded nudging. The skill of global and downscaled models is evaluated against different gridded observations for precipitation, 2 m-temperature, evapotranspiration, and against measured discharge time-series on a monthly basis. Multi-year spatial deviation patterns and basin aggregated time series are examined for four globally distributed regions with different climatic characteristics: Siberia, Northern and Western Africa, the Central Australian Plane, and the Amazonian tropics. The simulations cover the period from 2003 to 2006 with a horizontal mesh of 30 km. The results suggest that the water budgets of the regional atmospheric simulations are highly sensitive to the physical parametrizations and the driving data. While the global reanalyses tend to underestimate 2 m-temperature by 0.2-2 K, the regional simulations are typically 0.5-3 K warmer than observed. Many configurations show difficulties in reproducing the water budget terms, e.g. with long-term mean precipitation biases of 150 mm month-1 and higher. Nevertheless, with the water budget analysis viable setups can be deduced for all four study regions.
Li, Peiyue; Wu, Jianhua; Qian, Hui; Chen, Jie
2013-03-01
This is the second part of the study on sensitivity analysis of the technique for order preference by similarity to ideal solution (TOPSIS) method in water quality assessment. In the present study, the sensitivity of the TOPSIS method to the index input data was investigated. The sensitivity was first theoretically analyzed under two major assumptions. One assumption was that one index or more of the samples were perturbed with the same ratio while other indices kept unchanged. The other one was that all indices of a given sample were changed simultaneously with the same ratio, while the indices of other samples were unchanged. Furthermore, a case study under assumption 2 was also carried out in this paper. When the same indices of different water samples are changed simultaneously with the same variation ratio, the final water quality assessment results will not be influenced at all. When the input data of all indices of a given sample are perturbed with the same variation ratio, the assessment values of all samples will be influenced theoretically. However, the case study shows that only the perturbed sample is sensitive to the variation, and a simple linear equation representing the relation between the closeness coefficient (CC) values of the perturbed sample and variation ratios can be derived under assumption 2. This linear equation can be used for determining the sample orders under various variation ratios. PMID:22832843
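The behaviour described under assumption 2 can be reproduced numerically with a standard TOPSIS implementation: perturbing all indices of one sample noticeably shifts only that sample's closeness coefficient. The water-quality matrix below is hypothetical, not the study's data set.

```python
import numpy as np

def topsis_cc(X, benefit):
    """TOPSIS closeness coefficients for samples (rows) over indices (columns)."""
    Z = X / np.linalg.norm(X, axis=0)            # vector-normalised matrix
    ideal = np.where(benefit, Z.max(0), Z.min(0))
    anti = np.where(benefit, Z.min(0), Z.max(0))
    d_pos = np.linalg.norm(Z - ideal, axis=1)    # distance to ideal solution
    d_neg = np.linalg.norm(Z - anti, axis=1)     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical water-quality matrix: 4 samples x 3 indices (all "cost" indices).
X = np.array([[1.2, 0.8, 3.0],
              [0.6, 1.1, 2.0],
              [2.0, 0.5, 4.0],
              [0.9, 0.9, 2.5]])
benefit = np.array([False, False, False])
cc0 = topsis_cc(X, benefit)

Xp = X.copy()
Xp[0] *= 1.10                                    # perturb all indices of sample 0
cc1 = topsis_cc(Xp, benefit)
```

The closeness coefficients of the unperturbed samples move only through the shared column norms and ideal/anti-ideal points, which is why in practice only the perturbed sample responds appreciably — consistent with the case-study finding.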
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not sufficiently accurate and have limited reference value, because their mathematical models are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and the measured friction-velocity curves, the simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step response curves under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from the state vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and their change rules are analyzed. Then the sensitivity
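The sensitivity-equation approach can be sketched on a much simpler system than the hydraulic drive unit: for a state equation x' = f(x, p), the sensitivity s = dx/dp obeys s' = (df/dx)s + df/dp and can be integrated alongside the state. The first-order plant below is purely illustrative.

```python
import numpy as np

a, u = 2.0, 1.0   # illustrative parameter and input

def f(y):
    x, s = y
    # state equation x' = -a*x + u and its sensitivity equation
    # s' = (df/dx)*s + df/da = -a*s - x
    return np.array([-a * x + u, -a * s - x])

# Classical RK4 on the augmented (state + sensitivity) system
y, dt = np.array([0.0, 0.0]), 0.001
for _ in range(5000):                      # integrate to t = 5
    k1 = f(y); k2 = f(y + dt / 2 * k1)
    k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
x_end, s_end = y

t = 5.0   # analytic sensitivity of x(t) = (u/a)(1 - exp(-a t)) w.r.t. a
s_exact = -(u / a**2) * (1 - np.exp(-a * t)) + (u / a) * t * np.exp(-a * t)
```

In the paper's setting the same construction is applied to the full nonlinear state equations, producing the time-varying coefficient and free-term matrices and, after integration, the sensitivity time-history curves of the seventeen parameters.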
NASA Astrophysics Data System (ADS)
Jain, A. K.; Meiyappan, P.; Song, Y.; Barman, R.
2011-12-01
This presentation explores the sensitivity of terrestrial ecosystems and atmospheric exchange of carbon to global environmental factors to advance our understanding of uncertainty in CO2 projections. We use a land surface model, the Integrated Science Assessment Model (ISAM) recently coupled into the NCAR Community Earth System Model (CESM1) framework to evaluate ecosystem variability due to climatic and anthropogenic factors. The factors considered here include climate change, increasing ambient CO2 concentrations, anthropogenic nitrogen deposition, and land use change (LUC) activities such as clearing of land for agriculture, pasture, and wood harvest. Each factor has a potential to influence the net ecosystem exchange (NEE) of CO2. Using the ISAM-CESM modeling framework, we evaluate the individual and concurrent effects of all these environmental factors on the terrestrial NEE over the 20th century and the 21st century. The ISAM biogeochemical cycles consist of fully prognostic carbon and nitrogen dynamics associated with changes in land cover, litter decomposition, and soil organic matter. The ISAM biophysical model accounts for water and energy processes in the vegetation and soil column, integrated over a time step of 30 minutes. The newly available CRU-NCEP climate forcing data (1850-2010, 0.5° × 0.5° spatial resolution) will be used for the historical period simulations. The 21st century simulations will be carried out using the Representative Concentration Pathway (RCP) storylines. This study will help quantify the importance of various environmental factors towards modeling land-atmosphere carbon exchange and better understand model related differences in CO2 estimates.
On global energy scenario, dye-sensitized solar cells and the promise of nanotechnology.
Reddy, K Govardhan; Deepak, T G; Anjusree, G S; Thomas, Sara; Vadukumpully, Sajini; Subramanian, K R V; Nair, Shantikumar V; Nair, A Sreekumaran
2014-04-21
One of the major problems that humanity has to face in the next 50 years is the energy crisis. The rising population, rapidly changing lifestyles, heavy industrialization and the changing landscape of cities have increased energy demands enormously. The present annual worldwide electricity consumption is 12 TW and is expected to become 24 TW by 2050, leaving a challenging deficit of 12 TW. The present scenario of using fossil fuels to meet the energy demand cannot accommodate this increase effectively, as fossil fuel resources are non-renewable and limited. They also cause significant environmental hazards, such as global warming and the associated climatic issues. Hence, there is an urgent necessity to adopt renewable sources of energy, which are eco-friendly and inexhaustible. Of the various renewable sources available, such as wind, tidal, geothermal, biomass and solar, solar serves as the most dependable option. Solar energy is freely and abundantly available. Once installed, the maintenance cost is very low. It is eco-friendly, fitting safely into our society without any disturbance. Producing electricity from the Sun requires the installation of solar panels, which incurs a huge initial cost and requires large areas of land. This is where nanotechnology comes into the picture, serving to increase the efficiency to higher levels and thus bringing down the overall cost of energy production. Also, emerging low-cost solar cell technologies, e.g. thin-film technologies and dye-sensitized solar cells (DSCs), help to replace the use of silicon, which is expensive. Again, nanotechnological implications can be applied in these solar cells to achieve higher efficiencies. This paper deals with the various available solar cells, choosing DSCs as the most appropriate ones. The nanotechnological implications which help to improve their performance are dealt with in detail. Additionally, the
Drought-Net: A global network to assess terrestrial ecosystem sensitivity to drought
NASA Astrophysics Data System (ADS)
Smith, Melinda; Sala, Osvaldo; Phillips, Richard
2015-04-01
All ecosystems will be impacted to some extent by climate change, with forecasts for more frequent and severe drought likely to have the greatest impact on terrestrial ecosystems. Terrestrial ecosystems are known to vary dramatically in their responses to drought. However, the factors that may make some ecosystems respond more or less than others remain unknown, and such understanding is critical for predicting drought impacts at regional and continental scales. To effectively forecast terrestrial ecosystem responses to drought, ecologists must assess responses of a range of different ecosystems to drought, and then improve existing models by incorporating the factors that cause such variation in response. Traditional site-based research cannot provide this knowledge because experiments conducted at individual sites are often not directly comparable due to differences in methodologies employed. Coordinated experimental networks, with identical protocols and comparable measurements, are ideally suited for comparative studies at regional to global scales. The US National Science Foundation-funded Drought-Net Research Coordination Network (www.drought-net.org) will advance understanding of the determinants of terrestrial ecosystem responses to drought by bringing together an international group of scientists to conduct two key activities over the next five years: 1) planning and coordinating new research using standardized measurements to leverage the value of existing drought experiments across the globe (Enhancing Existing Experiments, EEE), and 2) finalizing the design and facilitating the establishment of a new international network of coordinated drought experiments (the International Drought Experiment, IDE). The primary goals of these activities are to assess: (1) patterns of differential terrestrial ecosystem sensitivity to drought and (2) potential mechanisms underlying those patterns.
Annual flood sensitivities to El Niño-Southern Oscillation at the global scale
Ward, Philip J.; Eisner, S.; Flörke, M.; Dettinger, Michael D.; Kummu, M.
2013-01-01
Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño–Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water – a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958–2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationships between ENSO and annual floods have been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.
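A minimal sketch of the kind of statistical relationship examined here, using a rank correlation between an ENSO index and annual peak discharges. All data below are synthetic and invented for illustration; in the study, WaterGAP-simulated daily discharges and observed ENSO indices take their place.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for series without ties."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
years = 43                                   # e.g. a 1958-2000 record
enso = rng.standard_normal(years)            # synthetic ENSO index anomaly
# Hypothetical basin where annual floods intensify under La Nina
# (negative index values) and weaken under El Nino:
annual_peak = 1000 - 150 * enso + 100 * rng.standard_normal(years)
rho = spearman(enso, annual_peak)
```

A strongly negative rho corresponds to the "floods intensify with La Niña" pattern that the study finds to be more widespread than its opposite; assessing non-stationarity would repeat this computation in moving windows of years.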
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris
2015-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
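The ``Butterfly Effect'' breakdown can be demonstrated directly on the Lorenz system, a standard chaotic benchmark (not the airfoil flow itself): a perturbation of 1e-10 grows to attractor size, which is what defeats naive finite-difference and conventional adjoint sensitivities of long-time averages. The parameters are the classical Lorenz values.

```python
import numpy as np

def lorenz_step(v, dt, rho=28.0, sigma=10.0, beta=8/3):
    """One classical RK4 step of the Lorenz system."""
    def f(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(v); k2 = f(v + dt / 2 * k1)
    k3 = f(v + dt / 2 * k2); k4 = f(v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])       # tiny initial perturbation
sep = []
for _ in range(4000):                     # integrate 40 time units
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    sep.append(np.linalg.norm(a - b))     # exponential, then saturated, growth
```

LSS sidesteps this by replacing the exponentially diverging perturbed trajectory with a shadow trajectory that stays close to the reference one, so the gradient of the long-time average remains well defined.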
Double Precision Differential/Algebraic Sensitivity Analysis Code
1995-06-02
DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
Sensitivity Analysis Of Technological And Material Parameters In Roll Forming
NASA Astrophysics Data System (ADS)
Gehring, Albrecht; Saal, Helmut
2007-05-01
Roll forming has been applied for several decades to manufacture thin-gauged profiles. However, the knowledge about this technology is still based on empirical approaches. Due to the complexity of the forming process, the main effects on profile properties are difficult to identify. This is especially true for the interaction of technological parameters and material parameters. General considerations for building a finite-element model of the roll forming process are given in this paper. A sensitivity analysis is performed on the basis of a statistical design approach in order to identify the effects and interactions of different parameters on profile properties. The parameters included in the analysis are the roll diameter, the rolling speed, the sheet thickness, friction between the tools and the sheet, and the strain hardening behavior of the sheet material. The analysis includes an isotropic hardening model and a nonlinear kinematic hardening model. All jobs are executed in parallel to reduce the overall time, as the sensitivity analysis requires much CPU time. The results of the sensitivity analysis demonstrate the opportunities to improve the properties of roll-formed profiles by adjusting technological and material parameters to their optimum interacting performance.
NASA Astrophysics Data System (ADS)
Yu, Lisan; Jin, Xiangze
2014-10-01
This study presented an uncertainty assessment of the high-resolution global analysis of daily-mean ocean-surface vector winds (1987 onward) by the Objectively Analyzed air-sea Fluxes (OAFlux) project. The time series was synthesized from multiple satellite sensors using a variational approach to find a best fit to input data in a weighted least-squares cost function. The variational framework requires the a priori specification of the weights, or equivalently, the error covariances of input data, which are seldom known. Two key issues were investigated. The first issue examined the specification of the weights for the OAFlux synthesis. This was achieved by designing a set of weight-varying experiments and applying the criterion that the chosen weights should make the best fit of the cost function optimal with regard to both input satellite observations and the independent wind time series measurements at 126 buoy locations. The weights thus determined represent an approximation to the error covariances, which inevitably contain a degree of uncertainty. Hence, the second issue addressed the sensitivity of the OAFlux synthesis to the uncertainty in the weight assignments. Weight perturbation experiments were conducted and ensemble statistics were used to estimate the sensitivity. The study showed that the leading sources of uncertainty for the weight selection are high winds (>15 ms-1) and heavy rain, which are the conditions that cause divergence in wind retrievals from different sensors. Future technical advancement made in wind retrieval algorithms would be key to further improvement of the multisensor synthesis in events of severe storms.
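The core of such a weighted least-squares synthesis can be sketched for a single scalar: with weights taken as inverse error variances, the minimizer of the cost function is the inverse-variance weighted mean of the input retrievals (the numbers below are hypothetical, not OAFlux values):

```python
import numpy as np

def wls_fuse(obs, sigma):
    # Minimize J(x) = sum_i (x - obs_i)^2 / sigma_i^2.
    # The closed-form minimizer is the inverse-variance weighted mean.
    obs = np.asarray(obs, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return float(np.sum(w * obs) / np.sum(w))

# Two hypothetical wind-speed retrievals (m/s) with different error std devs:
fused = wls_fuse([10.0, 12.0], [1.0, 2.0])
print(fused)
```

The more accurate retrieval (smaller sigma) dominates the fused estimate, which is exactly why mis-specified weights, the subject of the sensitivity experiments above, bias the synthesis.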
Characterizing patterns of global land use: An analysis of global croplands data
NASA Astrophysics Data System (ADS)
Ramankutty, Navin; Foley, Jonathan A.
1998-12-01
Human activities have significantly shaped the state of terrestrial ecosystems throughout the world. One of the most direct manifestations of human activity within the biosphere has been the conversion of natural ecosystems to croplands. In this study, we present an analysis of the geographic distribution and spatial extent of permanent croplands. This analysis represents the area in permanent croplands during the early 1990s for each grid cell on a global 5 min (˜10 km) resolution latitude-longitude grid. To create this data set, we have combined a satellite-derived land cover data set with a variety of national and subnational agricultural inventory data. A simple calibration algorithm was used so that the spatial land cover data were generally consistent with nonspatial agricultural inventory data. The spatial distribution of croplands represented in this analysis presents a quantitative depiction of global agricultural geography. The regions of the world known to have intense cultivation (e.g., the North American corn belt, the European wheat-corn belt, the Ganges floodplain, and eastern China) are clearly portrayed in this analysis. It also captures the less intensely cultivated regions of the world, usually surrounding the regions mentioned above, and regions characterized by subsistence agriculture (e.g., Sahelian Africa). Data generated from this kind of analysis can be used within global climate models and global ecosystem models to assess the importance of permanent croplands on environmental processes. In particular, these data, combined with models, could help evaluate the role of changing land cover on regional climate and carbon cycling. Future efforts will need to concentrate on other land use systems, including pastures and regions of shifting cultivation. Furthermore, land use and land cover data must be extended to include an historical dimension so as to evaluate the changing state of the biosphere over time. This article contains supplementary
Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code
Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S
2005-02-11
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.
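The Monte Carlo methodology described above, sampling uncertain cross sections and propagating each sample through the burnup equations, can be sketched on a toy two-nuclide transmutation chain with an analytic Bateman solution (the chain, rates, and 10% relative uncertainty below are invented for illustration; ACAB handles full actinide chains):

```python
import numpy as np

def bateman_n2(lam1, lam2, t, n10=1.0):
    # Daughter inventory for the chain 1 -> 2 -> loss,
    # with effective transmutation rates lam_i = sigma_i * flux.
    return lam1 * n10 * (np.exp(-lam1 * t) - np.exp(-lam2 * t)) / (lam2 - lam1)

rng = np.random.default_rng(42)
n_samples = 10000
# Nominal effective rates (arbitrary units), each with an assumed 10%
# relative normal uncertainty on the underlying cross section:
lam1 = 0.10 * (1.0 + 0.10 * rng.standard_normal(n_samples))
lam2 = 0.05 * (1.0 + 0.10 * rng.standard_normal(n_samples))

n2 = bateman_n2(lam1, lam2, t=10.0)
cv = n2.std() / n2.mean()  # burnup uncertainty from the XS uncertainties
print(cv)
```

The sample coefficient of variation of the end-of-irradiation inventory is the global uncertainty estimate; repeating the exercise with one rate fixed at its nominal value apportions the uncertainty among reactions, which is the role of the sensitivity analysis in the abstract.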
Sensitivity analysis of infectious disease models: methods, advances and their application.
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V
2013-09-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods-scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method-and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that varied by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
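Of the five methods compared, the Sobol' variance decomposition admits a compact sketch. Below, first-order and total indices are estimated with the standard pick-freeze sample-matrix scheme on the Ishigami test function, whose analytic indices (approximately S1 = 0.31, S2 = 0.44, S3 = 0) are well known; the estimators are the Saltelli (2010) and Jansen forms, not anything specific to the paper's disease models:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard three-parameter benchmark with known Sobol' indices.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(7)
n, d = 50000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

first, total = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all columns but i
    fABi = ishigami(ABi)
    first.append(np.mean(fB * (fABi - fA)) / var)        # Saltelli estimator
    total.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # Jansen estimator
print(first, total)
```

For a transmission model, `ishigami` would be replaced by a function mapping a parameter vector to a scalar output such as final epidemic size, with columns sampled from the parameters' uncertainty ranges.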
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criterion weights.
Global Analysis of Horizontal Gene Transfer in Fusarium verticillioides
Technology Transfer Automated Retrieval System (TEKTRAN)
The co-occurrence of microbes within plants and other specialized niches may facilitate horizontal gene transfer (HGT) affecting host-pathogen interactions. We recently identified fungal-to-fungal HGTs involving metabolic gene clusters. For a global analysis of HGTs in the maize pathogen Fusarium ve...
Teaching Reading: Mexico's Global Method of Structural Analysis.
ERIC Educational Resources Information Center
Orozco, Cecilio
In 1985, the Global Method of Structural Analysis (GMSA) for teaching reading was introduced to first and second graders in Mexico. Breaking away from the more traditional educational methods, it established a basis for more flexible education and effectively utilized critical thinking skills. The preparation stage (reading readiness) begins in…
Ecological network analysis on global virtual water trade.
Yang, Zhifeng; Mao, Xufeng; Zhao, Xu; Chen, Bin
2012-02-01
Global water interdependencies are likely to increase with growing virtual water trade. To address the issues of the indirect effects of water trade through the global economic circulation, we use ecological network analysis (ENA) to shed light on the complicated system interactions. A global model of virtual water flow among agriculture and livestock production trade in 1995-1999 is also built as the basis for network analysis. Control analysis is used to identify the quantitative control or dependency relations. The utility analysis provides more indicators for describing the mutual relationship between two regions/countries by imitating the interactions in the ecosystem, and distinguishes the beneficiary and the contributor of the virtual water trade system. Results show that control and utility relations can aptly depict the mutual relations in the trade system, and that directly observable relations differ from integral ones once indirect interactions are considered. This paper offers a new way to depict the interrelations between trade components and can serve as a meaningful start as we continue to use ENA in providing more valuable implications for freshwater study on a global scale. PMID:22243129
Global Analysis of Helicity PDFs: past - present - future
de Florian, D.; Stratmann, M.; Sassot, R.; Vogelsang, W.
2011-04-11
We discuss the current status of the DSSV global analysis of helicity-dependent parton densities. A comparison with recent semi-inclusive DIS data from COMPASS is presented, and constraints on the polarized strangeness density are examined in some detail.
Globalization and International Student Mobility: A Network Analysis
ERIC Educational Resources Information Center
Shields, Robin
2013-01-01
This article analyzes changes to the network of international student mobility in higher education over a 10-year period (1999-2008). International student flows have increased rapidly, exceeding 3 million in 2009, and extensive data on mobility provide unique insight into global educational processes. The analysis is informed by three theoretical…
Global/local finite element analysis for textile composites
NASA Technical Reports Server (NTRS)
Woo, Kyeongsik; Whitcomb, John
1993-01-01
Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy.
Comparative Analysis, Global Policy Studies and the Human Condition.
ERIC Educational Resources Information Center
Bertsch, Gary K.
This paper examines the role that comparative analysis and global policy studies can play in explaining the human condition in the contemporary world. It investigates economic well-being, one dimension of the human condition, and examines some of the attributes that represent it and some of the forces that affect it in villages, social groupings,…
NASA Astrophysics Data System (ADS)
Peters, K.; Stier, P.; Quaas, J.; Graßl, H.
2012-07-01
In this study, we employ the global aerosol-climate model ECHAM-HAM to globally assess aerosol indirect effects (AIEs) resulting from shipping emissions of aerosols and aerosol precursor gases. We implement shipping emissions of sulphur dioxide (SO2), black carbon (BC) and particulate organic matter (POM) for the year 2000 into the model and quantify the model's sensitivity towards uncertainties associated with the emission parameterisation as well as with the shipping emissions themselves. Sensitivity experiments are designed to investigate (i) the uncertainty in the size distribution of emitted particles, (ii) the uncertainty associated with the total amount of emissions, and (iii) the impact of reducing carbonaceous emissions from ships. We use the results from one sensitivity experiment for a detailed discussion of shipping-induced changes in the global aerosol system as well as the resulting impact on cloud properties. From all sensitivity experiments, we find AIEs from shipping emissions to range from -0.32 ± 0.01 W m-2 to -0.07 ± 0.01 W m-2 (global mean value and inter-annual variability as a standard deviation). The magnitude of the AIEs depends much more on the assumed emission size distribution and subsequent aerosol microphysical interactions than on the magnitude of the emissions themselves. It is important to note that although the strongest estimate of AIEs from shipping emissions in this study is relatively large, still much larger estimates have been reported in the literature before on the basis of modelling studies. We find that omitting just carbonaceous particle emissions from ships favours new particle formation in the boundary layer. These newly formed particles contribute just about as much to the CCN budget as the carbonaceous particles would, leaving the globally averaged AIEs nearly unaltered compared to a simulation including carbonaceous particle emissions from ships.
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered due to considerable amounts of uncertainties and inconsistencies. A thorough review of these global land cover projects including evaluating the sources of error and uncertainty is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed type classes for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for the future global mapping projects including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
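The analytic eigenvalue-sensitivity expression used in the second and third methods can be verified numerically. The sketch below uses a symmetric base matrix so that left and right eigenvectors coincide and the derivative reduces to x^T (dA/dp) x; in the general complex case of the paper the left eigenvector appears as y^H (dA/dp) x / (y^H x). The matrices are random stand-ins, not aeroelastic system matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
# Symmetric base matrix with well-separated eigenvalues.
S = rng.standard_normal((4, 4))
A = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.05 * (S + S.T)
dA = rng.standard_normal((4, 4))   # perturbation direction dA/dp

w, V = np.linalg.eigh(A)
i = 0
x = V[:, i]                        # unit right (= left) eigenvector
analytic = x @ dA @ x              # dlambda/dp for a simple eigenvalue

# Central finite-difference check (the paper's first method):
eps = 1e-6
wp = np.linalg.eigvals(A + eps * dA)
wm = np.linalg.eigvals(A - eps * dA)
lam_p = wp[np.argmin(np.abs(wp - w[i]))]   # track the same eigenvalue
lam_m = wm[np.argmin(np.abs(wm - w[i]))]
fd = (lam_p - lam_m).real / (2 * eps)
print(analytic, fd)
```

Agreement between the analytic expression and the finite-difference estimate mirrors the paper's finding that all three sensitivity methods coincide, with the analytic route avoiding repeated eigensolves per design variable.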
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow principles similar to the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
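The leave-one-out posterior-probability idea can be sketched with a hand-rolled pooled-covariance (linear) discriminant: refit with each point omitted and record how the remaining points' posteriors move. The data, influence summary, and fitting details below are illustrative assumptions, not the authors' exact construction:

```python
import numpy as np

def fit_lda(X, y):
    # Pooled-covariance Gaussian (linear) discriminant fit.
    classes = np.unique(y)
    mus = np.array([X[y == c].mean(axis=0) for c in classes])
    scatter = sum((X[y == c] - mus[k]).T @ (X[y == c] - mus[k])
                  for k, c in enumerate(classes))
    cov = scatter / (len(X) - len(classes))
    priors = np.array([np.mean(y == c) for c in classes])
    return mus, cov, priors

def posteriors(X, model):
    # Bayes posteriors from Gaussian class densities with a shared covariance.
    mus, cov, priors = model
    inv = np.linalg.inv(cov)
    logp = np.stack([np.log(priors[k])
                     - 0.5 * np.einsum('ij,jk,ik->i', X - mus[k], inv, X - mus[k])
                     for k in range(len(mus))], axis=1)
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0, 0], 1.0, (20, 2)), rng.normal([3, 3], 1.0, (20, 2))])
y = np.repeat([0, 1], 20)

full = posteriors(X, fit_lda(X, y))
# Influence of omitting point j: mean absolute shift in the remaining
# points' posterior probability of class 0 (one curve per j in the display).
influence = np.empty(len(X))
for j in range(len(X)):
    keep = np.arange(len(X)) != j
    loo = posteriors(X[keep], fit_lda(X[keep], y[keep]))
    influence[j] = np.mean(np.abs(loo[:, 0] - full[keep, 0]))
print(influence.round(4))
```

Plotting the per-point posterior shifts (rather than only their mean) gives the proposed graphical display of how each omission moves every other observation.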
NASA Astrophysics Data System (ADS)
Tarasova, O. A.; Jalkanen, L.
2010-12-01
The WMO Global Atmosphere Watch (GAW) Programme is the only existing long-term international global programme providing an international coordinated framework for observations and analysis of the chemical composition of the atmosphere. GAW is a partnership involving contributors from about 80 countries. It includes a coordinated global network of observing stations along with supporting facilities (Central Facilities) and expert groups (Scientific Advisory Groups, SAGs and Expert Teams, ETs). Currently GAW coordinates activities and data from 27 Global Stations and a substantial number of Regional and Contributing Stations. Station information is available through the GAW Station Information System GAWSIS (http://gaw.empa.ch/gawsis/). There are six key groups of variables which are addressed by the GAW Programme, namely: ozone, reactive gases, greenhouse gases, aerosols, UV radiation and precipitation chemistry. GAW works to implement integrated observations unifying measurements from different platforms (ground based in situ and remote, balloons, aircraft and satellite) supported by modeling activities. GAW provides data for ozone assessments, Greenhouse Gas Bulletins, Ozone Bulletins and precipitation chemistry assessments published on a regular basis and for early warnings of changes in the chemical composition and related physical characteristics of the atmosphere. To ensure that observations can be used for global assessments, the GAW Programme has developed a Quality Assurance system. Five types of Central Facilities dedicated to the six groups of measurement variables are operated by WMO Members and form the basis of quality assurance and data archiving for the GAW global monitoring network. They include Central Calibration Laboratories (CCLs) that host primary standards (PS), Quality Assurance/Science Activity Centres (QA/SACs), World Calibration Centers (WCCs), Regional Calibration Centers (RCCs), and World Data Centers (WDCs) with responsibility for
Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2016-01-01
This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with control strategies and the chances of infectious humans, exposed humans and the vector population becoming infected. Numerical simulations are carried out with the help of the fourth-order Runge-Kutta procedure. PMID:27505634
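Sensitivity of a reproduction number is conventionally measured by the normalized forward sensitivity index (elasticity) (dR0/dp)(p/R0). The sketch below applies it to a generic R0 = beta/(gamma + mu) from a simple SIR-type model, not the paper's Leishmania R0; the parameter values are invented:

```python
def r0(beta, gamma, mu):
    # Hypothetical reproduction number of a simple SIR-type model.
    return beta / (gamma + mu)

def elasticity(f, params, name, h=1e-6):
    # Normalized forward sensitivity index (dR0/dp) * (p / R0),
    # with the derivative estimated by a central difference.
    up = dict(params); up[name] = params[name] * (1 + h)
    dn = dict(params); dn[name] = params[name] * (1 - h)
    deriv = (f(**up) - f(**dn)) / (2 * h * params[name])
    return deriv * params[name] / f(**params)

p = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
for name in p:
    print(name, round(elasticity(r0, p, name), 4))
```

For this R0 the indices are analytic: +1 for beta and -gamma/(gamma+mu), -mu/(gamma+mu) for the removal rates, so a 10% reduction in beta yields roughly a 10% reduction in R0, which is the ranking logic used to target control strategies.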
Global Analysis, Interpretation, and Modelling: First Science Conference
NASA Technical Reports Server (NTRS)
Sahagian, Dork
1995-01-01
Topics considered include: Biomass of termites and their emissions of methane and carbon dioxide - A global database; Carbon isotope discrimination during photosynthesis and the isotope ratio of respired CO2 in boreal forest ecosystems; Estimation of methane emission from rice paddies in mainland China; Climate and nitrogen controls on the geography and timescales of terrestrial biogeochemical cycling; Potential role of vegetation feedback in the climate sensitivity of high-latitude regions - A case study at 6000 years B.P.; Interannual variation of carbon exchange fluxes in terrestrial ecosystems; and Variations in modeled atmospheric transport of carbon dioxide and the consequences for CO2 inversions.
Redox Sensitivities of Global Cellular Cysteine Residues under Reductive and Oxidative Stress.
Araki, Kazutaka; Kusano, Hidewo; Sasaki, Naoyuki; Tanaka, Riko; Hatta, Tomohisa; Fukui, Kazuhiko; Natsume, Tohru
2016-08-01
The protein cysteine residue is one of the amino acids most susceptible to oxidative modifications, frequently caused by oxidative stress. Several applications have enabled cysteine-targeted proteomics analysis with simultaneous detection and quantitation. In this study, we employed a quantitative approach using a set of iodoacetyl-based cysteine reactive isobaric tags (iodoTMT) and evaluated the transient cellular oxidation ratio of free and reversibly modified cysteine thiols under DTT and hydrogen peroxide (H2O2) treatments. DTT treatment (1 mM for 5 min) reduced most cysteine thiols, irrespective of their cellular localizations. It also caused some unique oxidative shifts, including for peroxiredoxin 2 (PRDX2), uroporphyrinogen decarboxylase (UROD), and thioredoxin (TXN), proteins reportedly affected by cellular reactive oxygen species production. Modest H2O2 treatment (50 μM for 5 min) did not cause global oxidations but instead had apparently reductive effects. Moreover, with H2O2, significant oxidative shifts were observed only in redox active proteins, like PRDX2, peroxiredoxin 1 (PRDX1), TXN, and glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Overall, our quantitative data illustrated both H2O2- and reduction-mediated cellular responses, whereby while redox homeostasis is maintained, highly reactive thiols can potentiate the specific, rapid cellular signaling to counteract acute redox stress. PMID:27350002
Objective analysis of the ARM IOP data: method and sensitivity
Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H
1999-04-01
Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular-grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper-air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper-air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
Sensitivity analysis of transport modeling in a fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Merkel, Broder J.
2015-03-01
Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool to investigate fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells. At the site, a tracer test with NaCl was performed under natural gradient conditions. The observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics were employed for sensitivity analysis. The sensitivity analysis results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve. The calibrated model fits well with the observed data set.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-01
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Control of a mechanical aeration process via topological sensitivity analysis
NASA Astrophysics Data System (ADS)
Abdelwahed, M.; Hassine, M.; Masmoudi, M.
2009-06-01
The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.
Leek, E Charles; Roberts, Mark; Oliver, Zoe J; Cristino, Filipe; Pegna, Alan J
2016-08-01
Here we investigated the time course underlying differential processing of local and global shape information during the perception of complex three-dimensional (3D) objects. Observers made shape matching judgments about pairs of sequentially presented multi-part novel objects. Event-related potentials (ERPs) were used to measure perceptual sensitivity to 3D shape differences in terms of local part structure and global shape configuration - based on predictions derived from hierarchical structural description models of object recognition. There were three types of different object trials in which stimulus pairs (1) shared local parts but differed in global shape configuration; (2) contained different local parts but shared global configuration; or (3) shared neither local parts nor global configuration. Analyses of the ERP data showed differential amplitude modulation as a function of shape similarity as early as the N1 component, between 146 and 215 ms post-stimulus onset. These negative amplitude deflections were more similar between objects sharing global shape configuration than local part structure. Differentiation among all stimulus types was reflected in N2 amplitude modulations between 276 and 330 ms. sLORETA inverse solutions showed stronger involvement of left occipitotemporal areas during the N1 for object discrimination weighted towards local part structure. The results suggest that the perception of 3D object shape involves parallel processing of information at local and global scales. This processing is characterised by relatively slow derivation of 'fine-grained' local shape structure, and fast derivation of 'coarse-grained' global shape configuration. We propose that the rapid early derivation of global shape attributes underlies the observed patterns of N1 amplitude modulations. PMID:27396674
Recent advances in steady compressible aerodynamic sensitivity analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene J.-W.; Jones, Henry E.
1992-01-01
Sensitivity analysis methods are classified as belonging to either of the two broad categories: the discrete (quasi-analytical) approach and the continuous approach. The two approaches differ by the order in which discretization and differentiation of the governing equations and boundary conditions is undertaken. The discussion focuses on the discrete approach. Basic equations are presented, and the major difficulties are reviewed in some detail, as are the proposed solutions. Recent research activity concerned with the continuous approach is also discussed.
Sensitivity Analysis of Inverse Methods in Eddy Current Pit Characterization
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Knopp, Jeremy S.
2010-02-01
A sensitivity analysis was performed for a pit characterization problem to quantify the impact of potential sources of variation on the performance of inverse methods. Certain data processing steps, including careful feature extraction, background clutter removal and compensation for variation in the scan step size through the tubing, were found to be critical to achieve good estimates of the pit depth and diameter. The variation studied in model probe dimensions did not adversely affect performance.
Performance, robustness and sensitivity analysis of the nonlinear tuned vibration absorber
NASA Astrophysics Data System (ADS)
Detroux, T.; Habib, G.; Masset, L.; Kerschen, G.
2015-08-01
The nonlinear tuned vibration absorber (NLTVA) is a recently developed nonlinear absorber which generalizes Den Hartog's equal peak method to nonlinear systems. If the purposeful introduction of nonlinearity can enhance system performance, it can also give rise to adverse dynamical phenomena, including detached resonance curves and quasiperiodic regimes of motion. Through the combination of numerical continuation of periodic solutions, bifurcation detection and tracking, and global analysis, the present study identifies boundaries in the NLTVA parameter space delimiting safe, unsafe and unacceptable operations. The sensitivity of these boundaries to uncertainty in the NLTVA parameters is also investigated.
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Global Ocean Sensitivity to Local Geologically Short-Term Variability of Freshwater Fluxes
NASA Astrophysics Data System (ADS)
Seidov, D.; Haupt, B. J.
2004-12-01
The geologic record and computer modeling indicate that the transitions between cold and warm climates during the last deglaciation, driven by internal climate dynamics, were geologically very fast, lasting for only decades or shorter. The thermohaline circulation (THC) is, perhaps, the only viable candidate for driving these kinds of abrupt changes. The current perception of how the THC may become an agent of abrupt climate change is that the THC is rather sensitive to changes in freshwater fluxes in the high latitudes, also known as major meltwater events. Our recent numerical experiments challenge the idea of high-latitudinal meltwater events as the only possible cause of THC alteration. These experiments suggest that the inter-basin sea surface salinity contrasts caused by the disparity of freshwater fluxes over the world ocean can also be a very potent factor in THC dynamics. To address the role of changes in both high-latitudinal and inter-basin freshwater fluxes in altering the global THC, we performed several simple numerical experiments. First, we ran an atmospheric control experiment using the NCAR Community Climate Model (CCM) with observed sea surface temperature (SST) and salinity to obtain the present-day control atmospheric state, that is, the wind stress, SST, and freshwater fluxes across the sea surface. Next, we ran an oceanic control experiment using the GFDL Modular Ocean Model (MOM) with these sea surface conditions from the CCM. In the first series of experiments, we specified idealized anomalies of freshwater fluxes in the northern North Atlantic, the Southern Ocean, and the subtropical North Atlantic and North Pacific. These experiments gave us insight into the relative importance of high-latitudinal and inter-basin short-term fluctuations in the freshwater balance for THC dynamics. In the second series of experiments, we simulated the disruption of the freshwater regime in the northern North Atlantic caused by freshwater floods from Lake Agassiz (a glacial lake that
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling based techniques. For contrast, some MPP based approaches are also examined.
Sensitivity Analysis of Launch Vehicle Debris Risk Model
NASA Technical Reports Server (NTRS)
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
Sensitivity analysis of coexistence in ecological communities: theory and application.
Barabás, György; Pásztor, Liz; Meszéna, Géza; Ostling, Annette
2014-12-01
Sensitivity analysis, the study of how ecological variables of interest respond to changes in external conditions, is a theoretically well-developed and widely applied approach in population ecology. Though the application of sensitivity analysis to predicting the response of species-rich communities to disturbances also has a long history, derivation of a mathematical framework for understanding the factors leading to robust coexistence has only been a recent undertaking. Here we suggest that this development opens up a new perspective, providing advances ranging from the applied to the theoretical. First, it yields a framework to be applied in specific cases for assessing the extinction risk of community modules in the face of environmental change. Second, it can be used to determine trait combinations allowing for coexistence that is robust to environmental variation, and limits to diversity in the presence of environmental variation, for specific community types. Third, it offers general insights into the nature of communities that are robust to environmental variation. We apply recent community-level extensions of mathematical sensitivity analysis to example models for illustration. We discuss the advantages and limitations of the method, and some of the empirical questions the theoretical framework could help answer. PMID:25252135
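The community-level sensitivity framework referred to above typically rests on the implicit function theorem: if the community dynamics satisfy f(x*, E) = 0 at equilibrium, the response of the equilibrium to an environmental parameter E is dx*/dE = -J⁻¹ ∂f/∂E, with J the Jacobian evaluated at the equilibrium. The following is a minimal sketch for a two-species Lotka-Volterra competition module; all parameter values are hypothetical and chosen only for illustration:

```python
import numpy as np

# Two-species Lotka-Volterra competition: dx_i/dt = x_i (r_i - x_i - a*x_j).
# Parameter values below are hypothetical, for illustration only.
r = np.array([1.0, 0.8])     # intrinsic growth rates
a = 0.4                      # symmetric competition coefficient
M = np.array([[1.0, a],
              [a, 1.0]])

# Coexistence equilibrium solves M x* = r.
x_star = np.linalg.solve(M, r)

# At the equilibrium the per-capita growth rates vanish, so the Jacobian of
# the full dynamics simplifies to J = -diag(x*) M.
J = -np.diag(x_star) @ M

# Response of the equilibrium to the environmental parameter E = r_1:
# df/dE = (x1*, 0), and dx*/dE = -J^{-1} df/dE (implicit function theorem).
df_dE = np.array([x_star[0], 0.0])
dx_dE = -np.linalg.solve(J, df_dE)
print("equilibrium:", x_star, " d(x*)/d(r1):", dx_dE)
```

The magnitude of these sensitivities grows as J approaches singularity, which is one way of seeing why nearly non-robust coexistence responds strongly to environmental variation.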
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
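For simple (non-repeated) eigenvalues of a non-Hermitian matrix, the first-order eigenvalue derivative used in such sensitivity analyses has a closed form in terms of the left and right eigenvectors: dλ/dp = (yᴴ (dA/dp) x) / (yᴴ x). A minimal sketch on a toy 2x2 non-symmetric matrix (not the structural-dynamics systems of the paper), checked against a finite difference:

```python
import numpy as np

def eigenvalue_derivative(A, dA, k=0):
    """First-order derivative of the k-th eigenvalue of a (possibly
    non-Hermitian) matrix A with respect to a parameter p, given
    dA = dA/dp: dlam/dp = (y^H dA x) / (y^H x), with right eigenvector x
    and left eigenvector y belonging to the same eigenvalue."""
    lam, X = np.linalg.eig(A)
    mu, Y = np.linalg.eig(A.conj().T)           # right eigvecs of A^H = left eigvecs of A
    j = np.argmin(np.abs(mu - lam[k].conj()))   # pair up the eigenvalues
    x, y = X[:, k], Y[:, j]
    return (y.conj() @ dA @ x) / (y.conj() @ x)

# Sanity check against a finite difference on a small non-symmetric matrix.
A = np.array([[1.0, 2.0], [0.5, 3.0]])
dA = np.array([[0.0, 1.0], [0.0, 0.0]])         # A(p) = A + p*dA
lam, _ = np.linalg.eig(A)
eps = 1e-6
lam_pert = np.linalg.eigvals(A + eps * dA)
d_fd = (lam_pert[np.argmin(np.abs(lam_pert - lam[0]))] - lam[0]) / eps
d_an = eigenvalue_derivative(A, dA, k=0)
```

The normalization condition yᴴx ≠ 0 holds whenever the eigenvalue is simple; for repeated or nearly defective eigenvalues this formula degrades, which is part of why the paper discusses normalization choices.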
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989
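The paper's sensitivity analysis is based on posterior predictive checking; a simpler, widely used complement for probing departures from MAR is delta adjustment ("tipping-point" analysis), where MAR-style imputations are shifted by a fixed offset δ and the estimate is re-examined. A minimal sketch on synthetic data (not the OQ-45/HAQ data; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic outcome scores (e.g., questionnaire totals); higher = more distress.
n = 260
y = rng.normal(70.0, 10.0, n)
# Dropout is made more likely for higher scores: an MNAR missingness mechanism.
p_miss = 1.0 / (1.0 + np.exp(-(y - 75.0) / 5.0))
missing = rng.uniform(size=n) < p_miss
y_obs = y[~missing]

def estimate_mean(delta, n_imp=50):
    """Mean outcome after imputing missing scores by resampling the observed
    distribution, with imputed values shifted by delta to probe departures
    from MAR (delta = 0 corresponds to the MAR assumption)."""
    means = []
    for _ in range(n_imp):
        imputed = rng.choice(y_obs, size=missing.sum(), replace=True) + delta
        means.append(np.concatenate([y_obs, imputed]).mean())
    return float(np.mean(means))

for delta in (0.0, 2.5, 5.0):
    print(f"delta = {delta:4.1f} -> estimated mean = {estimate_mean(delta):.2f}")
```

Sweeping δ shows how far the MNAR mechanism must depart from MAR before the substantive conclusion changes, which is the "worst reasonable case" logic the abstract describes.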
Stability investigations of airfoil flow by global analysis
NASA Technical Reports Server (NTRS)
Morzynski, Marek; Thiele, Frank
1992-01-01
As the result of global, non-parallel flow stability analysis the single value of the disturbance growth-rate and respective frequency is obtained. This complex value characterizes the stability of the whole flow configuration and is not referred to any particular flow pattern. The global analysis assures that all the flow elements (wake, boundary and shear layer) are taken into account. The physical phenomena connected with the wake instability are properly reproduced by the global analysis. This enhances the investigations of instability of any 2-D flows, including ones in which the boundary layer instability effects are known to be of dominating importance. Assuming fully 2-D disturbance form, the global linear stability problem is formulated. The system of partial differential equations is solved for the eigenvalues and eigenvectors. The equations, written in the pure stream function formulation, are discretized via FDM using a curvilinear coordinate system. The complex eigenvalues and corresponding eigenvectors are evaluated by an iterative method. The investigations performed for various Reynolds numbers emphasize that the wake instability develops into the Karman vortex street. This phenomenon is shown to be connected with the first mode obtained from the non-parallel flow stability analysis. The higher modes are reflecting different physical phenomena as for example Tollmien-Schlichting waves, originating in the boundary layer and having the tendency to emerge as instabilities for the growing Reynolds number. The investigations are carried out for a circular cylinder, oblong ellipsis and airfoil. It is shown that the onset of the wake instability, the waves in the boundary layer, the shear layer instability are different solutions of the same eigenvalue problem, formulated using the non-parallel theory. The analysis offers large potential possibilities as the generalization of methods used till now for the stability analysis.
Probabilistic constrained load flow based on sensitivity analysis
Karakatsanis, T.S.; Hatziargyriou, N.D.
1994-11-01
This paper presents a method for network constrained setting of control variables based on probabilistic load flow analysis. The method determines operating constraint violations for a whole planning period together with the probability of each violation. An iterative algorithm is subsequently employed providing adjustments of the control variables based on sensitivity analysis of the constrained variables with respect to the control variables. The method is applied to the IEEE 14 busbar system and to a realistic model of the Hellenic Interconnected system indicating its suitability for short-term operational planning applications.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
NASA Astrophysics Data System (ADS)
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
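The variance-based Sobol' indices that DELSA is compared against can be estimated with plain Monte Carlo sampling. As an illustration only (this is not the DELSA method, and it uses the standard Ishigami test function rather than a hydrologic model), the first-order index S_i = V[E(Y|X_i)]/V(Y) can be estimated with Saltelli's paired-matrix scheme:

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Ishigami function, a standard benchmark for sensitivity analysis."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

def sobol_first_order(model, d, n=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices with Saltelli's
    paired-matrix scheme; inputs assumed uniform on [-pi, pi]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # column i resampled from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

S = sobol_first_order(ishigami, d=3)
print("first-order Sobol' indices:", np.round(S, 3))
```

For the Ishigami function with a=7, b=0.1 the analytic values are S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0, so the estimates can be validated directly; dedicated libraries such as SALib provide hardened implementations with confidence intervals.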
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
NASA Technical Reports Server (NTRS)
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields-of-view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method, using the new channel ranking system, on CO2 retrieval.
NASA Astrophysics Data System (ADS)
Pappas, Christoforos; Fatichi, Simone; Leuzinger, Sebastian; Wolf, Annett; Burlando, Paolo
2013-06-01
Dynamic vegetation models have been widely used for analyzing ecosystem dynamics and their interactions with climate. Their performance has been tested extensively against observations and by model intercomparison studies. In the present analysis, the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS), a state-of-the-art ecosystem model, was evaluated by performing a global sensitivity analysis. The study aims at examining potential model limitations, particularly with regard to long-term applications. A detailed sensitivity analysis based on variance decomposition is presented to investigate structural model assumptions and to highlight processes and parameters that cause the highest variability in the output. First- and total-order sensitivity indices were calculated for selected parameters using Sobol's methodology. In order to elucidate the role of climate on model sensitivity, different climate forcings were used based on observations from Switzerland. The results clearly indicate a very high sensitivity of LPJ-GUESS to photosynthetic parameters. Intrinsic quantum efficiency alone is able to explain about 60% of the variability in vegetation carbon fluxes and pools for a wide range of climate forcings. Processes related to light harvesting were also found to be important together with parameters affecting forest structure (growth, establishment, and mortality). The model shows minor sensitivity to hydrological and soil texture parameters, questioning its skills in representing spatial vegetation heterogeneity at regional or watershed scales. In the light of these results, we discuss the deficiencies of LPJ-GUESS and possibly those of other, structurally similar, dynamic vegetation models and we highlight potential directions for further model improvements.
Advanced Sensitivity Analysis of the Danish Eulerian Model in Parallel and Grid Environment
NASA Astrophysics Data System (ADS)
Ostromsky, Tz.; Dimov, I.; Marinov, P.; Georgieva, R.; Zlatev, Z.
2011-11-01
A 3-stage sensitivity analysis approach, based on the analysis-of-variances technique for calculating Sobol's global sensitivity indices and computationally efficient Monte Carlo integration techniques, is considered and applied to a large-scale air pollution model, the Danish Eulerian Model. In the first stage it is necessary to carry out a set of computationally expensive numerical experiments and to extract the necessary sensitivity analysis data. The output is used to construct mesh-functions of ozone concentration ratios to be used in the next stages for evaluating the necessary variances. Here we use a version of the model specially adapted for the purpose, called SA-DEM. It has been successfully implemented and run on the most powerful parallel supercomputer in Bulgaria, the IBM Blue Gene/P. A more advanced version, capable of using efficiently the full capacity of this powerful supercomputer, is described in this paper, followed by some performance analysis of the numerical experiments. Another source of computational power for solving such a tough numerical problem is the computational grid. That is why another version of SA-DEM has been adapted to exploit efficiently the capacity of our Grid infrastructure. The numerical results from both the parallel and Grid implementations are presented, compared and analysed.
Sensitivity analysis of fine sediment models using heterogeneous data
NASA Astrophysics Data System (ADS)
Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.
2012-04-01
Sediments play an important role in many aquatic systems. Their transport and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transport in time and space is therefore important in designing interventions and making management decisions. This research is related to fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and in determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the
Sensitivity Analysis of a Pharmacokinetic Model of Vaginal Anti-HIV Microbicide Drug Delivery.
Jarrett, Angela M; Gao, Yajing; Hussaini, M Yousuff; Cogan, Nicholas G; Katz, David F
2016-05-01
Uncertainties in parameter values in microbicide pharmacokinetics (PK) models confound the models' use in understanding the determinants of drug delivery and in designing and interpreting dosing and sampling in PK studies. A global sensitivity analysis (Sobol' indices) was performed for a compartmental model of the pharmacokinetics of gel delivery of tenofovir to the vaginal mucosa. The model's parameter space was explored to quantify model output sensitivities to parameters characterizing properties of the gel-drug product (volume, drug transport, initial loading) and host environment (thicknesses of the mucosal epithelium and stroma and the role of ambient vaginal fluid in diluting gel). Greatest sensitivities overall were to the initial drug concentration in gel, the gel-epithelium partition coefficient for drug, and the rate constant for gel dilution by vaginal fluid. Sensitivities for the 3 PK measures of drug concentration were somewhat different from those for the kinetic PK measure. Sensitivities in the stromal compartment (where tenofovir acts against host cells) and a simulated biopsy also depended on the thicknesses of the epithelium and stroma. This methodology and the results here contribute an approach to help interpret uncertainties in measures of vaginal microbicide gel properties and their host environment. In turn, this will inform rational gel design and optimization. PMID:27012224
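Variance-based first-order Sobol' indices like those used above can be estimated with a simple pick-freeze Monte Carlo scheme. The sketch below applies it to a toy additive model with known analytic indices, not to the paper's compartmental PK model; the weights and sample sizes are illustrative only.

```python
import numpy as np

# Toy additive model standing in for the PK model: three uniform(-1,1)
# "parameters" with weights w, so the analytic first-order Sobol' indices
# are S_i = w_i**2 / sum(w**2). All numbers here are illustrative only.
w = np.array([4.0, 2.0, 1.0])

def model(X):
    return X @ w

rng = np.random.default_rng(0)
n, d = 200_000, 3
A = rng.uniform(-1, 1, (n, d))    # two independent sample matrices
B = rng.uniform(-1, 1, (n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

S1 = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]           # "freeze" parameter i at A's values
    # pick-freeze estimator of the first-order index for parameter i
    S1.append(np.mean(yA * (model(ABi) - yB)) / var_y)

print([f"{s:.3f}" for s in S1])   # analytic values: 16/21, 4/21, 1/21
```

Because the toy model is additive, the first-order indices sum to one; for the interacting PK model they would not, and total-effect indices would be needed as well.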
Sensitivity analysis of the GNSS derived Victoria plate motion
NASA Astrophysics Data System (ADS)
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
estimated trend (Williams 2003, Langbein 2012). Finally, our preferable angular velocity estimation is used to evaluate the consequences on the kinematics of the Victoria block, namely the magnitude and azimuth of the relative motions with respect to the Nubia and Somalia plates and their tectonic implications. References Agnew, D. C. (2013). Realistic simulations of geodetic network data: The Fakenet package, Seismol. Res. Lett., 84 , 426-432, doi:10.1785/0220120185. Blewitt, G. & Lavallee, D., (2002). Effect of annual signals on geodetic velocity, J. geophys. Res., 107(B7), doi:10.1029/2001JB000570. Bos, M.S., R.M.S. Fernandes, S. Williams, L. Bastos (2012) Fast Error Analysis of Continuous GNSS Observations with Missing Data, Journal of Geodesy, doi: 10.1007/s00190-012-0605-0. Bos, M.S., L. Bastos, R.M.S. Fernandes, (2009). The influence of seasonal signals on the estimation of the tectonic motion in short continuous GPS time-series, J. of Geodynamics, j.jog.2009.10.005. Fernandes, R.M.S., J. M. Miranda, D. Delvaux, D. S. Stamps and E. Saria (2013). Re-evaluation of the kinematics of Victoria Block using continuous GNSS data, Geophysical Journal International, doi:10.1093/gji/ggs071. Langbein, J. (2012). Estimating rate uncertainty with maximum likelihood: differences between power-law and flicker-random-walk models, Journal of Geodesy, Volume 86, Issue 9, pp 775-783, Williams, S. D. P. (2003). Offsets in Global Positioning System time series, J. Geophys. Res., 108, 2310, doi:10.1029/2002JB002156, B6.
Analysis of frequency characteristics and sensitivity of compliant mechanisms
NASA Astrophysics Data System (ADS)
Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua
2016-03-01
Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of large-deformation compliant mechanisms are studied. Firstly, the pseudo-rigid-body model under static and kinetic conditions is modified to make it more suitable for the dynamic analysis of compliant mechanisms. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of an ordinary compliant four-bar mechanism are established using analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of the compliant mechanism are analyzed, taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. The simulation results show that the dynamic characteristics of compliant mechanisms are relatively sensitive to the structural dimensions, cross-section parameters, and material properties of the mechanisms. The results are of theoretical significance and practical value for the structural optimization of compliant mechanisms, the improvement of their dynamic properties and the expansion of their application range.
Sensitivity-analysis techniques: self-teaching curriculum
Iman, R.L.; Conover, W.J.
1982-06-01
This self-teaching curriculum on sensitivity-analysis techniques consists of three parts: (1) Use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) Use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) Application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
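The Latin hypercube sampling step taught in part (1) can be sketched in a few lines: stratify each dimension into n equal intervals, draw one jittered point per interval, and permute each column independently. This is a generic illustration, not the SAND79-1473 program itself.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions on [0,1): exactly one point falls in each
    of the n equal-width strata of every dimension."""
    # one jittered point per stratum, then an independent shuffle per column
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(42)
X = latin_hypercube(10, 3, rng)
# every column contains exactly one point in each interval [k/10, (k+1)/10)
print(np.sort(np.floor(X * 10).astype(int), axis=0))
```

Compared with pseudo-random sampling, this guarantees each input's marginal range is covered evenly even at small sample sizes, which is why it pairs well with regression-based sensitivity analysis.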
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
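The need for an implicit ("stiff") integrator like LSODE can be illustrated on a minimal two-species kinetics problem with widely separated rate constants. The sketch below uses a plain backward Euler step rather than LSODE's multistep machinery, and the rate constants are invented for illustration.

```python
import numpy as np

# Hypothetical stiff chain A -> B -> C with rate constants separated by
# four orders of magnitude (values invented). Backward Euler stays stable
# at a step size where h*k1 = 100 would make an explicit method blow up.
k1, k2 = 1.0e4, 1.0
J = np.array([[-k1, 0.0],        # Jacobian of the linear system y' = J y,
              [ k1, -k2]])       # y = (concentration of A, concentration of B)

def backward_euler(y0, h, steps):
    I = np.eye(2)
    y = np.array(y0, float)
    for _ in range(steps):
        y = np.linalg.solve(I - h * J, y)   # solve (I - h J) y_new = y_old
    return y

y = backward_euler([1.0, 0.0], h=0.01, steps=100)   # integrate to t = 1
print(y)   # A is essentially gone; B has decayed roughly as exp(-k2*t)
```

The implicit step damps the fast mode unconditionally, which is the property that lets stiff solvers take steps governed by the slow chemistry rather than the fastest reaction.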
Hyperspectral data analysis procedures with reduced sensitivity to noise
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1993-01-01
Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, providing the possibility of delivering much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure, including an example analysis of a data set, is described, illustrating this effect.
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves a higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
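A minimal sketch of the particle swarm step at the heart of such an approach is given below. The fitness function is a toy quadratic standing in for cross-validated classification error, and the parameter names, bounds, target, and PSO constants are assumptions, not values from the paper.

```python
import numpy as np

# Toy particle swarm in the spirit of PSOGO-Senti: each particle encodes a
# (feature dimension, SVM C, SVM gamma) triple. The fitness is an invented
# quadratic standing in for cross-validated error.
rng = np.random.default_rng(1)
target = np.array([500.0, 1.0, 0.1])       # pretend-optimal settings

def fitness(x):                            # stand-in for CV classification error
    return np.sum(((x - target) / target) ** 2)

n_particles, dim, iters = 30, 3, 200
lo = np.array([10.0, 1e-3, 1e-3])          # assumed search bounds
hi = np.array([2000.0, 100.0, 10.0])

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()       # global best position

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)       # keep particles inside the bounds
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    g = pbest[pbest_val.argmin()].copy()

print(g, fitness(g))                        # should land near `target`
```

In the real method the fitness evaluation would be an SVM cross-validation run, which is why a derivative-free global optimizer is attractive here.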
Mechanical Performance and Parameter Sensitivity Analysis of 3D Braided Composites Joints
Wu, Yue; Nan, Bo; Chen, Liang
2014-01-01
3D braided composite joints are important components in CFRP trusses, with significant influence on the reliability and weight of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling choices, including material constant selection, element type, grid size, and boundary conditions, are discussed in detail. A method for determining the ultimate bearing capacity, which accounts for strength failure, is then established. Finally, the effects of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints are analyzed with a global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α. PMID:25121121
Life cycle assessment on biogas production from straw and its sensitivity analysis.
Wang, Qiao-Li; Li, Wei; Gao, Xiang; Li, Su-Jing
2016-02-01
This study investigates the overall environmental impacts and Global Warming Potentials (GWPs) of a straw-based biogas production process via the cradle-to-gate life cycle assessment (LCA) technique. Eco-indicator 99 (H) and IPCC 2007 GWP with three time horizons are utilized. The results indicate that the biogas production process is beneficial for the overall environment but harmful in terms of GWPs, and its harmful effect on GWPs strengthens with time. Use of gas-fired power that burns the self-produced natural gas (NG) can create a more sustainable process. Moreover, sensitivity analysis indicated that total electricity consumption and the CO2 absorbent in the purification unit have the largest sensitivity to the environment. Hence, more effort should be devoted to more efficient use of electricity and wiser selection of the CO2 absorbent. PMID:26649899
Treatment of body forces in boundary element design sensitivity analysis
NASA Technical Reports Server (NTRS)
Saigal, Sunil; Kane, James H.; Aithal, R.; Cheng, Jizu
1989-01-01
The inclusion of body forces has received a good deal of attention in boundary element research. The consideration of such forces is essential in the design of high-performance components such as fan and turbine disks in a gas turbine engine. Due to their critical performance requirements, optimal shapes are often desired for these components. The boundary element method (BEM) offers the possibility of being an efficient method for such iterative analyses as shape optimization. The implicit differentiation of the boundary integral equations is performed to obtain the sensitivity equations. The body forces are accounted for either by particular integrals for uniform body forces or by a surface integration for non-uniform body forces. The corresponding sensitivity equations for both these cases are presented. The validity of the present formulations is established through close agreement with exact analytical results.
Sensitivity analysis for nonrandom dropout: a local influence approach.
Verbeke, G; Molenberghs, G; Thijs, H; Lesaffre, E; Kenward, M G
2001-03-01
Diggle and Kenward (1994, Applied Statistics 43, 49-93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats. PMID:11252620
Sensitivity analysis of a TPB degradation rate model
Crawford, C.; Edwards, T.; Wilmarth, B.
2006-08-01
A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the predictions of the model over intervals of values for the influential factors was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. Thus, these factor settings are considered as those that yield the "worst-case" scenario for TPB degradation rate for Tank 50 aggregation, and, as such, they define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.
An easily implemented static condensation method for structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly, and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
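The direct, forward-difference, and central-difference sensitivity schemes compared above can be illustrated on a 2-DOF spring system (not the paper's cube or car models; the stiffness values are invented). The direct method solves K (du/dk) = -(dK/dk) u, while the difference schemes perturb k and re-solve.

```python
import numpy as np

# 2-DOF spring system with an invented joint stiffness k: displacement
# sensitivity du/dk by the direct method versus finite differences.
def K(k):                                  # global stiffness matrix
    return np.array([[k + 10.0, -k],
                     [-k,        k + 5.0]])

dKdk = np.array([[ 1.0, -1.0],             # dK/dk, exact by inspection
                 [-1.0,  1.0]])
f = np.array([1.0, 0.0])                   # applied load
k0, h = 2.0, 1e-6                          # nominal stiffness, FD step

u = np.linalg.solve(K(k0), f)
direct = np.linalg.solve(K(k0), -dKdk @ u)             # direct method
fwd = (np.linalg.solve(K(k0 + h), f) - u) / h          # forward difference, O(h)
ctr = (np.linalg.solve(K(k0 + h), f)
       - np.linalg.solve(K(k0 - h), f)) / (2 * h)      # central difference, O(h^2)
print(direct, fwd, ctr)
```

The direct method needs one extra solve with the already-factored K, which is why it scales better than differencing when many sensitivities are required.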
Multiplexed analysis of chromosome conformation at vastly improved sensitivity
Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.
2015-01-01
Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method to produce a new approach, called next-generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209
Sensitive LC MS quantitative analysis of carbohydrates by Cs+ attachment.
Rogatsky, Eduard; Jayatillake, Harsha; Goswami, Gayotri; Tomuta, Vlad; Stein, Daniel
2005-11-01
The development of a sensitive assay for the quantitative analysis of carbohydrates from human plasma using LC/MS/MS is described in this paper. After sample preparation, carbohydrates were cationized by Cs(+) following their separation by normal-phase liquid chromatography on an amino-based column. Cesium is capable of forming a quasi-molecular ion [M + Cs](+) with neutral carbohydrate molecules in the positive ion mode of electrospray ionization mass spectrometry. The mass spectrometer was operated in multiple reaction monitoring mode, and transitions [M + 133] --> 133 were monitored (M, carbohydrate molecular weight). The new method is robust, highly sensitive, rapid, and does not require postcolumn addition or derivatization. It is useful in clinical research for measurement of carbohydrate molecules by isotope dilution assay. PMID:16182559
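The transition arithmetic behind the [M + 133] --> 133 scheme is simple enough to sketch; the nominal sugar masses below are standard values, and the particular sugars chosen are purely illustrative.

```python
# Transition arithmetic for the Cs+ attachment assay: a neutral sugar of
# nominal mass M is monitored as [M + Cs]+ -> Cs+, i.e. (M + 133) -> 133.
CS = 133  # nominal mass of cesium (monoisotopic 132.9054)

# nominal monosaccharide/oligosaccharide masses (illustrative selection)
sugars = {"glucose": 180, "sucrose": 342, "maltotriose": 504}
transitions = {name: (m + CS, CS) for name, m in sugars.items()}
print(transitions)  # glucose is monitored as the 313 -> 133 transition
```

Because every precursor fragments to the same Cs+ product ion, the precursor mass alone distinguishes the analytes in the MRM method.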
Sensitivity Analysis of Hardwired Parameters in GALE Codes
Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.
2008-12-01
The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.
Sensitivity analysis for dynamic systems with time-lags
NASA Astrophysics Data System (ADS)
Rihan, Fathalla A.
2003-02-01
Many problems in bioscience for which observations are reported in the literature can be modelled by suitable functional differential equations incorporating time-lags (other terminology: delays) or memory effects, parameterized by scientifically meaningful constant parameters p and/or variable parameters (for example, control functions) u(t). It is often desirable to have information about the effect on the solution of the dynamic system of perturbing the initial data, control functions, time-lags and other parameters appearing in the model. The main purpose of this paper is to derive a general theory for sensitivity analysis of mathematical models that contain time-lags. In this paper, we use adjoint equations and direct methods to estimate the sensitivity functions when the parameters appearing in the model are not only constants but also variables of time. To illustrate the results, the methodology is applied numerically to an example of a delay differential model.
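The direct method mentioned above augments the state equations with sensitivity equations and integrates both together. The sketch below does this for a lag-free special case, y' = -p y, whose sensitivity s = dy/dp obeys s' = -p s - y with the analytic solution s(t) = -t e^{-pt}; extending it to a model with time-lags would additionally require storing and interpolating the delayed history.

```python
import numpy as np

# Direct-method sensitivity for the lag-free model y' = -p*y, y(0) = 1:
# s = dy/dp satisfies the auxiliary ODE s' = -p*s - y, s(0) = 0, integrated
# alongside the state with classical RK4.
p, h, T = 0.5, 1e-3, 2.0

def f(y, s):
    return -p * y, -p * s - y       # coupled state + sensitivity right-hand side

y, s = 1.0, 0.0
for _ in range(int(round(T / h))):
    k1 = f(y, s)
    k2 = f(y + h/2 * k1[0], s + h/2 * k1[1])
    k3 = f(y + h/2 * k2[0], s + h/2 * k2[1])
    k4 = f(y + h * k3[0], s + h * k3[1])
    y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    s += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

print(y, s)   # compare with exp(-p*T) and -T*exp(-p*T)
```

The sensitivity system is always linear in s even when the state equation is nonlinear, which keeps the direct method cheap relative to re-solving the model for perturbed parameters.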
Transversity and Collins Fragmentation Functions: Towards a New Global Analysis
Anselmino, M.; Boglione, M.; Melis, S.; Prokudin, A.; D'Alesio, U.; Kotzinian, A.; Murgia, F.
2009-08-04
We present an update of a previous global analysis of the experimental data on azimuthal asymmetries in semi-inclusive deep inelastic scattering (SIDIS), from the HERMES and COMPASS Collaborations, and in e+e- → h1h2X processes, from the Belle Collaboration. Compared to the first extraction, a more precise determination of the Collins fragmentation function and the transversity distribution function for u and d quarks is obtained.
A Comparative Analysis of Global Cropping Systems Models and Maps
NASA Astrophysics Data System (ADS)
Anderson, W. B.; You, L.; Wood, S.; Wood-Sichra, U.; Wu, W.
2013-12-01
Agricultural practices have dramatically altered the land cover of the Earth, but the spatial extent and intensity of these practices is often difficult to catalogue. Cropland accounts for nearly 15 million km2 of the Earth's land cover - amounting to 12% of the Earth's ice-free land surface - yet information on the distribution and performance of specific crops is often available only through national or sub-national statistics. While remote sensing products offer spatially disaggregated information, those currently available on a global scale are ill-suited for many applications due to the limited separation of crop types within the area classified as cropland. Recently, however, there have been multiple independent efforts to combine the detailed information available from statistical surveys with supplemental spatial information to produce a spatially explicit global dataset specific to individual crops for the year 2000. While these datasets provide analysts and decision makers with improved information on global cropping systems, the final global cropping maps differ from one another substantially. This study aims to explore and quantify systematic similarities and differences between four major global cropping systems products: the monthly irrigated and rainfed crop areas around the year 2000 (MIRCA2000) dataset, the spatial production allocation model (SPAM), the global agro-ecological zone (GAEZ) dataset, and the dataset developed by Monfreda et al., 2008. The analysis explores not only the final cropping systems maps but also the interdependencies of each product, methodological differences and modeling assumptions, which will provide users with information vital for discerning between datasets in selecting a product appropriate for each intended application.
Low global sensitivity of metabolic rate to temperature in calcified marine invertebrates.
Watson, Sue-Ann; Morley, Simon A; Bates, Amanda E; Clark, Melody S; Day, Robert W; Lamare, Miles; Martin, Stephanie M; Southgate, Paul C; Tan, Koh Siang; Tyler, Paul A; Peck, Lloyd S
2014-01-01
Metabolic rate is a key component of energy budgets that scales with body size and varies with large-scale environmental geographical patterns. Here we conduct an analysis of standard metabolic rates (SMR) of marine ectotherms across a 70° latitudinal gradient in both hemispheres that spanned collection temperatures of 0-30 °C. To account for latitudinal differences in the size and skeletal composition between species, SMR was mass normalized to that of a standard-sized (223 mg) ash-free dry mass individual. SMR was measured for 17 species of calcified invertebrates (bivalves, gastropods, urchins and brachiopods), using a single consistent methodology, including 11 species whose SMR was described for the first time. SMR of 15 out of 17 species had a mass-scaling exponent between 2/3 and 1, with no greater support for a 3/4 than for a 2/3 scaling exponent. After accounting for taxonomy and variability in parameter estimates among species using variance-weighted linear mixed effects modelling, the temperature sensitivity of SMR had an activation energy (Ea) of 0.16 eV for both Northern and Southern Hemisphere species, which is lower than predicted under the metabolic theory of ecology (Ea 0.2-1.2 eV). Northern Hemisphere species, however, had a higher SMR at each habitat temperature, but a lower mass-scaling exponent for SMR. Evolutionary trade-offs that may be driving differences in metabolic rate (such as metabolic cold adaptation of Northern Hemisphere species) will have important impacts on species' abilities to respond to changing environments. PMID:24036933
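The activation energy in such an analysis comes from the slope of an Arrhenius plot: ln(SMR) against 1/(k_B T). A minimal sketch with idealized, noise-free data is given below; the prefactor is invented, and only the Ea value of 0.16 eV is taken from the abstract.

```python
import numpy as np

kB = 8.617e-5                        # Boltzmann constant in eV/K
Ea_true = 0.16                       # eV, the estimate reported above
T = np.linspace(273.15, 303.15, 7)   # the 0-30 C collection range, in kelvin
smr = 2.0 * np.exp(-Ea_true / (kB * T))   # idealized Arrhenius SMR, invented prefactor

# Ea is minus the slope of ln(SMR) regressed on 1/(kB*T)
slope, intercept = np.polyfit(1.0 / (kB * T), np.log(smr), 1)
print(f"recovered Ea = {-slope:.3f} eV")
```

With real measurements the fit would carry species-level random effects and variance weights, as in the paper's mixed-effects modelling, rather than a single least-squares line.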
Biosphere Dose Conversion Factor Importance and Sensitivity Analysis
M. Wasiolek
2004-10-15
This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
Bi-global Stability Analysis of Compressible Open Cavity Flows
NASA Astrophysics Data System (ADS)
Sun, Yiyang; Taira, Kunihiko; Cattafesta, Louis; Ukeiley, Lawrence
2015-11-01
The effect of compressibility on the stability characteristics of rectangular open cavity flows is numerically examined. In our earlier work with two-dimensional direct numerical simulation of open cavity flows, we found that increasing Mach number destabilizes the flow in the subsonic regime but stabilizes the flow in the transonic regime. To further examine the compressibility effect, linear bi-global stability analysis is performed over the same range of Mach numbers to investigate the influence of three-dimensional instabilities in flows over open cavities with length-to-depth ratios of 2 and 6. We identify dominant eigenmodes for varied Mach numbers and spanwise wavelengths with respect to two-dimensional stable and unstable steady states. Over a range of spanwise wavelengths, we reveal the growth/decay rates and frequencies of the dominant global modes. Based on the insights from the present analysis, we compare our findings from global stability analysis with our companion three-dimensional flow control experiments aimed at reducing pressure fluctuations caused by cavity flow unsteadiness. This work was supported by the US Air Force Office of Scientific Research (Grant FA9550-13-1-0091).
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
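The Pearson and Spearman measures used in such studies differ in how they treat nonlinear but monotone responses. The sketch below uses an invented monotone model (not BISON) with three sampled inputs, where the rank-based Spearman coefficient recovers a stronger signal for the dominant input than the linear Pearson coefficient.

```python
import numpy as np

# Invented monotone-but-nonlinear response in three uniform inputs:
# x0 drives the output through a strong exponential, x1 weakly, x2 not at all.
rng = np.random.default_rng(7)
n = 5000
x = rng.uniform(0, 1, (n, 3))
y = np.exp(4 * x[:, 0]) + 0.5 * x[:, 1] + rng.normal(0, 0.1, n)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def spearman(a, b):
    # Pearson correlation of the ranks (double argsort; no ties expected here)
    return pearson(np.argsort(np.argsort(a)).astype(float),
                   np.argsort(np.argsort(b)).astype(float))

for i in range(3):
    print(f"x{i}: pearson={pearson(x[:, i], y):.2f} "
          f"spearman={spearman(x[:, i], y):.2f}")
```

The gap between the two coefficients for x0 is itself a useful diagnostic: it flags a response that is monotone in the input but poorly captured by a linear fit.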
Simplifying multivariate survival analysis using global score test methodology
NASA Astrophysics Data System (ADS)
Zain, Zakiyah; Aziz, Nazrina; Ahmad, Yuhaniz
2015-12-01
In clinical trials, the main purpose is often to compare efficacy between experimental and control treatments. Treatment comparisons often involve multiple endpoints, and this situation further complicates the analysis of survival data. In the case of tumor patients, endpoints concerning survival times include: times from tumor removal until the first, the second and the third tumor recurrences, and time to death. For each patient, these endpoints are correlated, and estimating the correlation between two score statistics is fundamental in deriving the overall treatment advantage. In this paper, the bivariate survival analysis method using the global score test methodology is extended to the multivariate setting.
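The arithmetic of a global score test can be illustrated with a small numerical sketch: sum the per-endpoint score statistics and standardize by the variance of the sum, which is where the pairwise correlations enter. The four score values and the correlation matrix below are invented for illustration, not taken from the paper.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical standardized score statistics for K = 4 correlated
# survival endpoints (e.g. times to 1st/2nd/3rd recurrence and death).
z = np.array([1.8, 1.2, 0.9, 1.5])
corr = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.6, 0.4],
    [0.5, 0.6, 1.0, 0.5],
    [0.4, 0.4, 0.5, 1.0],
])

# Global score test: Var(sum of z) = 1' corr 1 for unit-variance scores,
# so the standardized global statistic is sum(z) / sqrt(corr.sum()).
t_global = z.sum() / sqrt(corr.sum())
p_one_sided = 0.5 * (1.0 - erf(t_global / sqrt(2.0)))  # normal tail
print(f"T = {t_global:.3f}, one-sided p = {p_one_sided:.4f}")
```

Ignoring the off-diagonal correlations would understate the variance of the sum and overstate significance, which is why estimating those correlations is central to the method.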
Global convergence analysis of a discrete time nonnegative ICA algorithm.
Ye, Mao
2006-01-01
When the independent sources are known to be nonnegative and well-grounded, meaning that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "Nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by exploiting the skew-symmetry property of this discrete-time "Nonnegative PCA" algorithm, its global convergence can be proven provided the learning rate satisfies a suitable condition. Simulation results are employed to further illustrate the advantages of this theory. PMID:16526495
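A minimal sketch of the underlying idea: whiten nonnegative mixtures (keeping their mean), then search over rotations W for one that makes the outputs nonnegative, i.e. minimize the energy of the negative part of y = Wz. This implements the nonnegative-PCA objective by projected gradient descent; it is an illustrative variant, not the exact discrete-time update rule whose convergence the paper analyzes, and the mixing matrix and sources are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two nonnegative, well-grounded sources (nonzero density at zero).
s = rng.uniform(0.0, 1.0, size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # illustrative mixing matrix
x = A @ s

# Whiten the covariance but keep the mean (the nonnegativity cue
# lives in the uncentered data).
d, E = np.linalg.eigh(np.cov(x))
Q = E @ np.diag(d ** -0.5) @ E.T
z = Q @ x

# Minimize J(W) = E||min(Wz, 0)||^2 over rotations W, projecting back
# onto the orthogonal group after each gradient step.
W = np.eye(2)
eta = 0.1
for _ in range(500):
    y = W @ z
    y_neg = np.minimum(y, 0.0)               # negative part of outputs
    grad = 2.0 * (y_neg @ z.T) / z.shape[1]  # dJ/dW
    W -= eta * grad
    U, _, Vt = np.linalg.svd(W)              # nearest orthogonal matrix
    W = U @ Vt

y = W @ z
frac_neg = np.mean(y < -1e-2)
print(f"fraction of notably negative outputs: {frac_neg:.3f}")
```

At the separating rotation the outputs are (scaled) copies of the sources and hence nonnegative, so the fraction of negative outputs is a convenient convergence diagnostic.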
Hydrological sensitivity to greenhouse gases and aerosols in a global climate model
NASA Astrophysics Data System (ADS)
Kvalevåg, Maria Malene; Samset, Bjørn H.; Myhre, Gunnar
2013-04-01
Changes in greenhouse gases and aerosols alter the atmospheric energy budget on different time scales and at different levels in the atmosphere. We study the relationship between global mean precipitation changes, radiative forcing, and surface temperature change since preindustrial times caused by several climate change components (CO2, CH4, sulphate and black carbon (BC) aerosols, and solar forcing) using the National Center for Atmospheric Research Community Earth System Model (CESM1.03). We find a fast response in precipitation due to atmospheric instability that correlates with radiative forcing associated with atmospheric absorption, and a slower response caused by changes in surface temperature which correlates with radiative forcing at the top of the atmosphere. In general, global climate models show large differences in climate response to global warming, but here we find a strong relationship between global mean radiative forcing and global mean precipitation changes that is very consistent with other models, indicating that precipitation changes from a particular forcing mechanism are more robust than previously expected. In addition, we look at the precipitation response and relate it to changes in the lifetime of atmospheric water vapor (τ). BC aerosols have a significantly larger impact on changes in τ related to surface temperature than greenhouse gases, sulphate aerosols, and solar forcing, and are thus the forcing mechanism with the strongest fast precipitation response by this measure.
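The fast/slow decomposition described above can be written as a two-term budget: a fast component proportional to atmospheric absorption and a slow component proportional to surface warming. All coefficients and forcing values below are hypothetical, chosen only to show the bookkeeping and the sign behavior, not taken from the CESM runs.

```python
# Illustrative decomposition of the global precipitation response.
# Both coefficients are placeholders, not CESM results.
HYDRO_SENS = 2.5   # slow response: % precipitation change per K warming
FAST_COEF = -1.0   # fast response: % per W/m^2 of atmospheric absorption

def precip_change(delta_t_surface, atm_absorption):
    """Total % precipitation change = fast + slow contributions."""
    slow = HYDRO_SENS * delta_t_surface
    fast = FAST_COEF * atm_absorption
    return fast + slow

# CO2-like forcing: modest atmospheric absorption, notable warming,
# so the slow (positive) term dominates.
print(precip_change(delta_t_surface=1.0, atm_absorption=0.8))
# BC-like forcing: strong atmospheric absorption, weak surface warming,
# so the fast (negative) term can dominate.
print(precip_change(delta_t_surface=0.2, atm_absorption=2.0))
```

This is why strongly absorbing species like BC can reduce precipitation even while warming the surface, whereas CO2-like forcings increase it.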
GLobal Ocean Data Analysis Project (GLODAP): Data and Analyses
Sabine, C. L.; Key, R. M.; Feely, R. A.; Bullister, J. L.; Millero, F. J.; Wanninkhof, R.; Peng, T. H.; Kozyr, A.
The GLobal Ocean Data Analysis Project (GLODAP) is a cooperative effort to coordinate global synthesis projects funded through NOAA, DOE, and NSF as part of the Joint Global Ocean Flux Study - Synthesis and Modeling Project (JGOFS-SMP). Cruises conducted as part of the World Ocean Circulation Experiment (WOCE), JGOFS, and the NOAA Ocean-Atmosphere Exchange Study (OACES) over the decade of the 1990s have created an important oceanographic database for the scientific community investigating carbon cycling in the oceans. The unified data help to determine the global distributions of both natural and anthropogenic inorganic carbon, including radiocarbon. These estimates provide an important benchmark against which future observational studies will be compared. They also provide tools for the direct evaluation of numerical ocean carbon models. GLODAP information available through CDIAC includes gridded and bottle data, a live server, an interactive atlas that provides access to data plots, and other tools for viewing and interacting with the data. [From http://cdiac.esd.ornl.gov/oceans/glodap/Glopintrod.htm]
Global Atmospheric Chemistry/Transport Modeling and Data-Analysis
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
1999-01-01
This grant supported a global atmospheric chemistry/transport modeling and data-analysis project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for trace gases; (b) utilization of these inverse methods, which use either the Model for Atmospheric Chemistry and Transport (MATCH), which is based on analyzed observed winds, or back-trajectories calculated from these same winds, for determining regional and global source and sink strengths for long-lived trace gases important in ozone depletion and the greenhouse effect; (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple "titrating" gases; and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important ultimate goals included determination of regional source strengths of important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements, and hydrohalocarbons now used as alternatives to the above restricted halocarbons.
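The core of such an inverse method is a linear relation y = Hx + noise between observed mixing ratios y and unknown regional source strengths x, where the sensitivity matrix H comes from a transport model. A regularized least-squares sketch is below; the matrix H is random and the "true" emissions are invented, standing in for output of a model like MATCH driven by analyzed winds.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed mixing ratios y relate linearly to unknown regional
# emissions x through a transport sensitivity matrix H.
n_obs, n_regions = 40, 4
H = rng.uniform(0.1, 1.0, size=(n_obs, n_regions))  # illustrative
x_true = np.array([10.0, 4.0, 7.0, 1.0])            # hypothetical Tg/yr
y = H @ x_true + rng.normal(0.0, 0.1, n_obs)        # obs + noise

# Regularized least squares: x_hat = (H^T H + alpha I)^-1 H^T y.
alpha = 1e-3
x_hat = np.linalg.solve(H.T @ H + alpha * np.eye(n_regions), H.T @ y)
print("estimated source strengths:", np.round(x_hat, 2))
```

Real inversions additionally weight by observation-error covariance and a prior emission covariance (a Bayesian formulation), but the algebra is the same shape as this sketch.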
Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations
NASA Astrophysics Data System (ADS)
Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej
2010-06-01
Modeling of bl